
I am delighted to share that I have successfully defended my PhD thesis, marking the culmination of a rigorous and rewarding research journey. This achievement opens a new chapter, driven by a commitment to advancing research in applied AI, particularly in the fields of robotics and Explainable AI (XAI), and exploring its significant potential for industry.

A Successful Defense and Deep Gratitude

After several years of intensive research, I successfully defended my PhD thesis on competence modeling in Human-Robot Cooperation. This work explored how autonomous systems, particularly robots, can better understand their own capabilities and those of their human partners in order to achieve common goals more effectively. My dissertation addressed three core areas:

  • Robot Self-Assessment: Developing methods to estimate a robot’s own classification accuracy, even with limited labeled data, by using semi-supervised techniques and confidence modeling.
  • Assessing the Human Teacher: Creating models to gauge the teacher’s domain knowledge and to optimize the learning process by prioritizing samples the robot is uncertain about but that are valuable for the human to label, thereby accelerating training (a minimal sketch of this uncertainty-based selection follows the list).
  • Effective Human-Robot Collaboration: Introducing an active learning interface that uses dimension-reduction techniques to visualize deep image feature spaces. This allows non-expert humans to label robot object recordings faster and more accurately, addressing the challenge of non-i.i.d. training data that arises from such interactive teaching.
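
To make the uncertainty-based selection from the second point concrete, here is a minimal sketch in Python. It assumes a scikit-learn-style classifier with predict_proba, a hypothetical unlabeled pool, and an illustrative query budget; it shows the general technique, not the exact method from the thesis.

    # Pick the pool samples the current model is least confident about,
    # so the human teacher's labeling effort goes where it helps most.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def least_confident_indices(model, pool, budget):
        """Return indices of the budget samples with the lowest top-class probability."""
        probs = model.predict_proba(pool)       # per-sample class probabilities
        confidence = probs.max(axis=1)          # confidence = probability of the predicted class
        return np.argsort(confidence)[:budget]  # least confident first

    # Toy usage: fit on a handful of labeled points, then query the teacher.
    rng = np.random.default_rng(0)
    X_labeled = rng.normal(size=(20, 5))
    y_labeled = (X_labeled[:, 0] > 0).astype(int)
    X_pool = rng.normal(size=(200, 5))

    clf = LogisticRegression().fit(X_labeled, y_labeled)
    print("Ask the teacher to label pool samples:", least_confident_indices(clf, X_pool, budget=5))

Other selection criteria, such as entropy or margin sampling, would slot into the same loop; the point is simply that the robot's own uncertainty decides what the human is asked to label next.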

I would like to express my sincere gratitude to the University of Bielefeld, especially the Neuroinformatics Group led by Helge Ritter, for providing a fantastic academic environment. A special thank you goes to the Honda Research Institute and my supervisor, Heiko Wersing, for their invaluable guidance, support, and the opportunity to conduct research at the intersection of fundamental AI and real-world robotics challenges.

The Promise of Applied AI in Robotics

My thesis underlined that autonomous robots will increasingly become part of our daily lives, but the transition to fully autonomous task execution is still incomplete. This necessitates a strong focus on Human-Robot Cooperation, where the human remains “in the loop.”

The core challenge in this cooperation is not just about making the robot smarter, but making the interaction smarter. This requires the robot to possess a sophisticated model of “competence”—a continuous assessment of both its own and its human partner’s current abilities relative to the shared objective. This is where applied AI, using techniques like active and incremental learning, becomes critical for building truly capable and adaptable robotic systems.
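
As a toy illustration of what such a continuous self-competence estimate could look like, the sketch below keeps an exponentially weighted running accuracy that is updated whenever the human partner confirms or corrects a prediction. The class name and smoothing factor are assumptions for illustration, not the model developed in the thesis.

    # Running self-competence estimate, updated from occasional human feedback.
    class CompetenceTracker:
        def __init__(self, alpha: float = 0.1):
            self.alpha = alpha      # weight given to the newest confirmed outcome
            self.estimate = 0.5     # start from "coin flip" competence

        def update(self, prediction, ground_truth) -> float:
            """Fold one confirmed outcome into the exponentially weighted accuracy."""
            hit = 1.0 if prediction == ground_truth else 0.0
            self.estimate = (1 - self.alpha) * self.estimate + self.alpha * hit
            return self.estimate

    tracker = CompetenceTracker()
    for pred, truth in [("cup", "cup"), ("bowl", "cup"), ("cup", "cup")]:
        print(f"competence after feedback: {tracker.update(pred, truth):.2f}")

A running estimate like this is what lets the robot decide when to act on its own and when to defer to its human partner.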

Focusing on Explainable AI (XAI)

Looking ahead, I see Explainable AI (XAI) as one of the most critical and opportunity-rich areas for the next generation of AI systems, particularly in robotics.

If a robot is going to work alongside a human, not only must its decisions be robust, but they must also be transparent and understandable. If a robot fails or makes an unexpected move, the human partner needs to understand why. XAI is the field dedicated to addressing this need by making AI systems less of a ‘black box.’

For instance, when a human is teaching a robot (as in my thesis), understanding the boundary of the robot’s current knowledge (its competence) is key to efficient training. This is a form of explainability. Moving forward, I am deeply committed to continuing research in XAI, focusing on methods that make complex robotic and perception-based AI systems interpretable to the end-user.
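
One simple way to expose such a knowledge boundary to an end-user, in the spirit of the interface described above, is to project feature vectors into two dimensions and color them by the classifier’s confidence: low-confidence regions then show where the robot’s competence ends. The sketch below uses PCA and synthetic data purely for illustration; it is not the dimension-reduction pipeline from the thesis.

    # Project features to 2D and color by model confidence to expose the
    # region where the classifier's competence drops off.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1, 1, size=(100, 10)), rng.normal(+1, 1, size=(100, 10))])
    y = np.array([0] * 100 + [1] * 100)

    clf = LogisticRegression().fit(X, y)
    confidence = clf.predict_proba(X).max(axis=1)   # top-class probability per sample
    X_2d = PCA(n_components=2).fit_transform(X)     # 2D view of the feature space

    plt.scatter(X_2d[:, 0], X_2d[:, 1], c=confidence, cmap="viridis")
    plt.colorbar(label="classifier confidence")
    plt.title("Low-confidence samples mark the edge of the robot's competence")
    plt.show()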

The Next Step: Research Abroad

For the immediate future, my plan is to continue this trajectory of research, but to do so in an international setting. I am actively seeking research opportunities abroad where I can contribute to and collaborate on projects focused on extending the state-of-the-art in Explainable AI. I believe that exposure to different research cultures and collaborative networks is essential for making significant progress in this rapidly evolving domain.

Opportunities in Industry

Beyond fundamental research, I firmly believe that the principles of XAI and robust competence modeling hold immense potential for industrial application.

In sectors like advanced manufacturing, complex logistics, and autonomous inspection, AI systems are increasingly deployed in safety-critical or high-stakes environments. Here, the ability to audit, verify, and trust the AI’s output is not a luxury—it is a requirement. XAI techniques offer a path to unlocking the broader adoption of advanced AI by providing the necessary transparency for regulatory compliance, risk assessment, and operational troubleshooting.

I see a clear market demand for engineering expertise that can bridge the gap between complex AI models (such as deep learning) and the practical need for robust, transparent, and user-friendly systems in the enterprise. The skills I honed during my PhD, building and testing deployable AI models with a focus on the quality of human interaction, are directly applicable to solving these challenges for future industry partners.
