Reading Group

Tue, Feb 04, 2025 (13:00 JP:B-633) Zhengyou Zhang: A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334, 2000. Since many robotic tasks rely on the camera calibration algorithm from OpenCV, which is built upon Zhang's PAMI 2000 paper, we would like to discuss the method in detail, together with its edge cases and other implications. A detailed description can also be found in the author's technical report. The paper will be presented by Pavel Krsek. (link) (report)
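For orientation before the discussion, below is a minimal usage sketch of the OpenCV implementation of Zhang's planar calibration. The image path, the 9×6 inner-corner board and the 25 mm square size are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of planar (Zhang-style) calibration with OpenCV.
# Image path, board size (9x6 inner corners) and 25 mm squares are
# illustrative assumptions, not values from the paper.
import glob
import numpy as np
import cv2

board_size = (9, 6)      # inner corners per row / column
square_size = 0.025      # metres

# 3D coordinates of the corners on the planar target (Z = 0).
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("./calib/*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:        # edge case: the pattern may be missed in some views
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# The method needs views of the plane in several different orientations.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Camera matrix K:\n", K)
print("Distortion coefficients:", dist.ravel())
```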

The regular ROP reading group slot is Tuesday 13:00–14:00 at JP:B-633 (alternating with seminars). The reading group organizer is Karla Štěpánová.

Past reading groups

  • Tue May 9, 2023 (13:00 JP:B-633): Lightweight probabilistic deep networks (CVPR, 2018, article)
    • Gast, Jochen, and Stefan Roth. “Lightweight probabilistic deep networks.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
    • Presenter: Ondrej Holešovský
    • Abstract: In this seminar, we will discuss a paper on probabilistic deep networks, and Ondra will also show an example of a running implementation (a minimal sketch of the paper's probabilistic output layer idea is included after this list). Even though probabilistic treatments of neural networks have a long history, they have not found widespread use in practice. Sampling approaches are often too slow already for simple networks. The size of the inputs and the depth of typical CNN architectures in computer vision only compound this problem. Uncertainty in neural networks has thus been largely ignored in practice, despite the fact that it may provide important information about the reliability of predictions and the inner workings of the network. In this paper, we introduce two lightweight approaches to making supervised learning with probabilistic deep networks practical: First, we suggest probabilistic output layers for classification and regression that require only minimal changes to existing networks. Second, we employ assumed density filtering and show that activation uncertainties can be propagated in a practical fashion through the entire network, again with minor changes. Both probabilistic networks retain the predictive power of the deterministic counterpart, but yield uncertainties that correlate well with the empirical error induced by their predictions. Moreover, the robustness to adversarial examples is significantly increased.
    • Presentation: Link to the presentation, pdf
  • Tue March 21, 2023 (13:00 JP:B-633): Ontologies for autonomous robotics (ROMAN, 2017, article), KnowRob 2.0 (ICRA, 2018, article)
    • Presenter: Radoslav Skoviera
    • Presentation: Link to presentation
    • Abstract: In this seminar, we will discuss two papers that describe the usage of ontologies for autonomous robotics and human-robot interaction. The first one (Ontologies for autonomous robotics, ROMAN, 2017, article) presents a proposal of the ontology standard for autonomous robotics. The second one (KnowRob 2.0, ICRA, 2018, article) shows how an ontology built on this standard might be used for applications in human-robot interaction scenarios. For those interested in more details, we also recommend the following two survey papers: 1) Ontology-Based Knowledge Representation in Robotic Systems: A Survey Oriented toward Applications (Applied Sciences, 2021) and 2) A Review and Comparison of Ontology-based Approaches to Robot Autonomy (The Knowledge Engineering Review, 2019).
  • Tue Jan 31, 2023 (13:00 JP:B-633): Learning and executing reusable behavior trees from natural language instructions (RAL + IROS, 2022, article)
    • Presenter: Karla Štěpánová
    • Presentation: Link to presentation
    • Abstract: In this letter, we demonstrate how behaviour trees, a well-established control architecture in gaming and robotics, can be used in conjunction with natural language instruction to provide a robust and modular control architecture for instructing autonomous agents to learn and perform novel complex tasks. We also show how behaviour trees generated using our approach can be generalised to novel scenarios, and can be re-used in future learning episodes to create increasingly complex behaviours. We validate this work against an existing corpus of natural language instructions, and demonstrate the application of our approach on both a simulated robot solving a toy problem and two distinct real-world robot platforms, which, respectively, complete a block sorting scenario and a patrol scenario. (A toy sketch of the behaviour-tree composites used in such controllers is included after this list.)
  • Thu October 6, 2022 (13:00 JP:B-633)
    • Perceive, Represent, Generate: Translating Multimodal Information to Robotic Motion Trajectories (IROS, 2022, pdf)
    • Presenter: Gabriela Šejnová, Presentation.
  • Thu April 21, 2022 (13:00 JP:B-633)
    • Action-conditioned 3D Human Motion Synthesis with Transformer VAE (ICCV, 2021, pdf)
    • Presenter: Gabriela Šejnová and Karla Štěpánová (ROP), Presentation.
    • Abstract: In this seminar, we want to focus on how transformers might be utilized for learning from demonstration. We aim to understand the underlying structure of transformers in more detail. We will mainly focus on “Action-conditioned 3D Human Motion Synthesis with Transformer VAE”. The original paper deals with the action-conditioned generation of realistic and diverse human motion sequences. We utilized this code for teaching a Pepper robot simple actions demonstrated by a human. We are interested in the ways the architecture might be extended to deal with 1) few demonstrated examples and 2) multimodal data. (A compact sketch of an action-conditioned Transformer VAE encoder is included after this list.)
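For the Gast & Roth paper (Lightweight probabilistic deep networks, May 2023 session), here is a minimal sketch of the first of the two ideas: a probabilistic (Gaussian) output layer for regression trained with the negative log-likelihood. The backbone, layer sizes and names are illustrative and not taken from the authors' code.

```python
# Sketch of a Gaussian regression head in the spirit of Gast & Roth (2018):
# the head predicts a mean and a variance and is trained with the Gaussian
# negative log-likelihood. Sizes and names are illustrative only.
import torch
import torch.nn as nn

class GaussianRegressionHead(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.mean = nn.Linear(in_features, out_features)
        self.log_var = nn.Linear(in_features, out_features)  # log-variance for numerical stability

    def forward(self, features):
        mu = self.mean(features)
        var = torch.exp(self.log_var(features))  # strictly positive variance
        return mu, var

def gaussian_nll(mu, var, target):
    # 0.5 * [log(var) + (target - mu)^2 / var], averaged over the batch
    return 0.5 * (torch.log(var) + (target - mu) ** 2 / var).mean()

# Usage: swap a deterministic regression head for the probabilistic one.
backbone = nn.Sequential(nn.Linear(10, 64), nn.ReLU())
head = GaussianRegressionHead(64, 1)

x, y = torch.randn(32, 10), torch.randn(32, 1)
mu, var = head(backbone(x))
loss = gaussian_nll(mu, var, y)   # var acts as a per-sample uncertainty estimate
loss.backward()
```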
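For the behaviour-tree paper (Jan 2023 session), below is a toy sketch of the two core composite nodes, sequence and fallback (selector). The node names and leaves are illustrative only; the paper builds such trees automatically from natural language instructions, which is not shown here.

```python
# Toy behaviour-tree sketch: sequence and fallback (selector) composites over
# simple leaf actions. Real behaviour trees also return a RUNNING status,
# omitted here for brevity. Names and leaves are illustrative only.
from typing import Callable, List

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Leaf:
    def __init__(self, name: str, action: Callable[[], str]):
        self.name, self.action = name, action
    def tick(self) -> str:
        return self.action()

class Sequence:
    """Succeeds only if all children succeed, ticked left to right."""
    def __init__(self, children: List):
        self.children = children
    def tick(self) -> str:
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Succeeds as soon as one child succeeds (also called a selector)."""
    def __init__(self, children: List):
        self.children = children
    def tick(self) -> str:
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# "Pick up the red block, otherwise ask for help" as a tiny tree.
tree = Fallback([
    Sequence([Leaf("block visible?", lambda: SUCCESS),
              Leaf("grasp block",    lambda: SUCCESS)]),
    Leaf("ask for help", lambda: SUCCESS),
])
print(tree.tick())  # -> SUCCESS
```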
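For the Transformer VAE paper (April 2022 session), below is a compact sketch of an action-conditioned transformer encoder with the reparameterization step, in the spirit of ACTOR (Petrovich et al., ICCV 2021): learnable distribution tokens are prepended to the embedded pose sequence and their outputs parametrize the latent Gaussian. All dimensions and names are illustrative assumptions, not values from the paper or its code.

```python
# Compact, illustrative sketch of an action-conditioned Transformer VAE encoder.
# Dimensions, token handling and names are simplified assumptions, not the
# original ACTOR implementation.
import torch
import torch.nn as nn

class ActionConditionedEncoder(nn.Module):
    def __init__(self, pose_dim=63, num_actions=12, d_model=256, latent_dim=256):
        super().__init__()
        self.pose_embed = nn.Linear(pose_dim, d_model)
        self.action_embed = nn.Embedding(num_actions, d_model)
        # two learnable tokens whose encoder outputs become mu and log-variance
        self.dist_tokens = nn.Parameter(torch.randn(2, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)

    def forward(self, poses, action):
        # poses: (batch, frames, pose_dim), action: (batch,) integer labels
        x = self.pose_embed(poses) + self.action_embed(action)[:, None, :]
        tokens = self.dist_tokens.expand(x.size(0), -1, -1)   # (batch, 2, d_model)
        h = self.encoder(torch.cat([tokens, x], dim=1))
        mu, logvar = self.to_mu(h[:, 0]), self.to_logvar(h[:, 1])
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return z, mu, logvar

enc = ActionConditionedEncoder()
z, mu, logvar = enc(torch.randn(8, 60, 63), torch.randint(0, 12, (8,)))
print(z.shape)  # torch.Size([8, 256]); a transformer decoder would generate motion from z
```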