I am an Assistant Professor at St. Olaf College, where I teach and conduct research in the field of Artificial Intelligence. I received my Ph.D. from Wright State University. My research focuses on developing intelligent autonomous agents using a variety of Artificial Intelligence methodologies. Specifically, my research interests include goal reasoning, machine learning, natural language processing, automated planning and monitoring, explainable AI, anomaly detection and handling, metacognition, and cognitive architectures.
Ph.D. in Computer Science • August 2017 - December 2021
Master's Degree in Computer Science • August 2015 - July 2017
Bachelor's in Electronics and Communication Engineering • May 2011 - April 2015
Assistant Professor • February 2022 - Present
I teach a diverse range of courses in the undergraduate curriculum, including both introductory and advanced courses. In addition to my teaching responsibilities, I conduct research in Artificial Intelligence, where I aim to contribute to the advancement of the field and to the development of intelligent autonomous systems that can reason, learn, and adapt to their environments. I also enjoy mentoring and guiding students and have worked with several students in various capacities. Whether through my teaching, research, or mentorship, I am committed to helping students learn and succeed in their academic and professional endeavors.
Graduate Research Assistant • May 2016 - December 2021
My research focuses on developing intelligent autonomous agents that can reason about their goals and the actions required to achieve them. This involves studying goal reasoning and decision dynamics in autonomous systems, as well as using probabilistic and statistical models to detect and respond to anomalies. I have published a number of papers on goal reasoning and goal operations in journals, international and national conferences, and workshops, and have developed substantial expertise in these areas. Ultimately, my research aims to advance the field of Artificial Intelligence and contribute to intelligent autonomous systems that can manage their own goals and make decisions based on their environments and the needs of their users.
Instructor • August 2019 - December 2019
I designed and taught Introduction to Computer Programming (CS1160), which introduces students to the fundamental concepts and skills of computer programming. In this role, I was responsible for planning and delivering the coursework, evaluating student progress, and providing feedback. I also mentored several undergraduate students and two graduate teaching assistants (GTAs), providing guidance and support to help them succeed in their academic and professional endeavors and encouraging them to pursue their interests in computer science and related fields.
Independent Study • April 2016 - July 2016
In this project, I used natural language processing and data mining techniques to study gender bias on the Rate My Professor website. To gather data, I used web scraping tools to collect reviews and ratings from the website, and then processed the collected data with the Stanford Natural Language Processing (NLP) parser, accessed from the R programming language. The parser allowed me to analyze the language used in the reviews and ratings and detect patterns or trends that might suggest the presence of gender bias. Overall, this project aimed to shed light on the issue of gender bias in academia and to explore the use of natural language processing and data mining techniques to study such biases.
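The original analysis used the Stanford NLP parser from R; the minimal Python sketch below only illustrates the general idea on already-collected reviews. The descriptor word list and record fields are hypothetical assumptions, not the study's actual lexicon or data format.

```python
# Minimal sketch (not the original R / Stanford NLP pipeline): given reviews that
# have already been scraped, compare how often hypothetical descriptor words are
# used for instructors grouped by gender. Word lists and field names are
# illustrative assumptions.
from collections import Counter
import re

# Hypothetical descriptor list; a real study would use a validated lexicon.
DESCRIPTORS = ["brilliant", "genius", "funny", "caring", "strict", "disorganized"]

def descriptor_counts(reviews, gender):
    """Count descriptor occurrences in reviews for instructors of one gender."""
    counts = Counter()
    for review in reviews:
        if review["gender"] != gender:
            continue
        tokens = re.findall(r"[a-z']+", review["text"].lower())
        for token in tokens:
            if token in DESCRIPTORS:
                counts[token] += 1
    return counts

# Example with toy records shaped like the scraped data.
reviews = [
    {"gender": "F", "text": "Very caring professor, but strict on deadlines."},
    {"gender": "M", "text": "Brilliant lecturer. Funny too."},
]
for g in ("F", "M"):
    print(g, dict(descriptor_counts(reviews, g)))
```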
Team member • January 2019 - December 2021
An autonomous underwater agent (a Slocum glider) named Grace tries to find the highest density of tagged fish (hotspots) in the entire region. The gray area around Grace is the range of its acoustic receiver, and the red dots are tagged fish, which emit acoustic signals with a specific timing and frequency. However, Grace cannot achieve such a complex goal of surveying and finding hotspots through a control architecture alone, which only manages its physical system. It also needs high-level cognition to survey the region (Gray's Reef National Marine Sanctuary). ...
While Grace surveys the region to find hotspots, it encounters several anomalies in the domain, including blockades and remora attacks. Blockades hinder the agent's movement by blocking its path either partially or fully: the agent can pass through partial blockades (green lines) by diving up or down in the water, but it cannot pass through full blockades (red lines). Apart from the blockades, there are remora attacks. A remora is a type of fish that latches onto Grace and reduces its forward speed, hindering Grace's ability to achieve its goal of finding a hotspot.
Grace can only be considered smart when it handles such anomalies with minimal to no human intervention. Grace therefore performs high-level decision-making through the MIDCA cognitive architecture while also communicating with its control architecture. The cognitive architecture takes percepts from the control signals, uses those percepts to perform state-space planning, and generates actions for Grace. Finally, since Grace's physical system cannot understand planning actions, we convert them into control signals for Grace to execute in the real world.
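The sketch below illustrates this cognition-to-control bridge under stated assumptions: the symbolic action names and command fields are hypothetical placeholders, not the actual Grace/MIDCA interface.

```python
# Minimal sketch of mapping symbolic planner actions to low-level control
# commands, with hypothetical action names and command fields.
from typing import Dict, List, Tuple

SymbolicAction = Tuple[str, Dict[str, float]]  # e.g., ("survey_leg", {"lat": ..., "lon": ...})

def to_control_command(action: SymbolicAction) -> Dict[str, float]:
    """Translate one planner action into a low-level control command."""
    name, args = action
    if name == "survey_leg":
        # Head toward the waypoint at survey depth.
        return {"target_lat": args["lat"], "target_lon": args["lon"], "depth_m": 10.0}
    if name == "dive_under_blockade":
        # Partial blockades can be passed by changing depth.
        return {"target_lat": args["lat"], "target_lon": args["lon"], "depth_m": 25.0}
    raise ValueError(f"No control mapping for planner action {name!r}")

def execute_plan(plan: List[SymbolicAction], send) -> None:
    """Convert each planned action to a control command and dispatch it."""
    for action in plan:
        send(to_control_command(action))

# Example: dispatch by printing instead of sending to the glider.
plan = [("survey_leg", {"lat": 31.40, "lon": -80.87}),
        ("dive_under_blockade", {"lat": 31.41, "lon": -80.88})]
execute_plan(plan, send=print)
```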
Finally, in the demo, we can observe Grace's responses to blockades at 0:18, 0:35, and several other points in the video. In comparison, Grace's responses to remora attacks appear as long pauses at 0:22, 0:43, and several other points.
Team member • August 2017 - December 2018
An autonomous agent (a Remus 100) named Remus tries to clear all the mines (green triangles) in specified locations (octagons/green areas). The gray area around Remus is the range of its sonar sensor, which Remus uses to detect mines. However, the green areas designated by humans are incomplete: mines are also present in other important areas (the wide rectangle) apart from the octagons. Therefore, Remus needs to generate new goals (from data obtained through its perception) in addition to the goals provided by a human. ...
Remus must be judicious in generating such new goals because of its resource limitations (battery life). It must not generate a new goal for every anomalous mine. Instead, Remus reasons about the anomalous mines, postulates potential explanations for a mine's presence in the area (e.g., an enemy submarine, an enemy ship, or an aerial vehicle), assesses the potential threats the mines pose to itself and to other friendly agents in the environment, and finally generates goals to remove the mines it perceives to be threats.
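The sketch below illustrates this budget-aware goal formulation under stated assumptions: the threat scores, removal costs, and threshold are hypothetical placeholders, not the actual Remus model.

```python
# Minimal sketch: score each anomalous mine by explanation-based threat, then
# adopt removal goals only while a hypothetical battery budget allows.
from dataclasses import dataclass
from typing import List

@dataclass
class AnomalousMine:
    mine_id: str
    threat: float        # estimated threat to Remus and friendly agents, in [0, 1]
    removal_cost: float  # estimated battery cost to remove this mine

def formulate_goals(mines: List[AnomalousMine],
                    battery_budget: float,
                    threat_threshold: float = 0.5) -> List[str]:
    """Return removal goals for the most threatening mines that fit the budget."""
    goals = []
    remaining = battery_budget
    # Consider the most threatening mines first.
    for mine in sorted(mines, key=lambda m: m.threat, reverse=True):
        if mine.threat < threat_threshold:
            break  # remaining mines are not considered threats
        if mine.removal_cost <= remaining:
            goals.append(f"remove({mine.mine_id})")
            remaining -= mine.removal_cost
    return goals

# Example with toy values.
mines = [AnomalousMine("m1", threat=0.9, removal_cost=3.0),
         AnomalousMine("m2", threat=0.3, removal_cost=1.0),
         AnomalousMine("m3", threat=0.7, removal_cost=5.0)]
print(formulate_goals(mines, battery_budget=6.0))  # ['remove(m1)']
```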
In this demo, Remus combines percepts from the real world with state-space planning, explanation, goal reasoning, and control signals. We observe Remus generating new goals for mines outside the green areas at several points in the video.
Team member • January 2017 - July 2017
Any robot working in the real world must perform three types of actions. First, it should perceive the world around it. Second, from the percepts, it should try to understand the world. Third, it should execute specific tasks in the world according to its abilities. The robot performing all three actions in the demo is a Baxter. ...
The Baxter robot receives sensory signals from the real world using cameras and microphones. It has built-in cameras on its face and in the palms of its two hands, plus an external Asus Kinect camera on its waist. From these cameras, we receive 2-dimensional images and 3-dimensional point clouds. We use Convolutional Neural Networks to process the data and detect real-world objects. In addition, Baxter has a built-in microphone, and we use Sphinx to convert speech commands to text.
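As a rough illustration of this perception pipeline, the sketch below uses off-the-shelf stand-ins: a pretrained Faster R-CNN detector from torchvision for object detection and CMU Sphinx via the speech_recognition package for speech-to-text. These are assumptions for illustration, not the project's original models, and the file paths are placeholders.

```python
# Minimal sketch of the camera + microphone perception pipeline using generic
# off-the-shelf components (not the original detector or Sphinx setup).
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
import speech_recognition as sr

def detect_objects(image_path: str, score_threshold: float = 0.8):
    """Run a pretrained CNN detector on one 2D camera image."""
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    image = read_image(image_path).float() / 255.0  # CxHxW in [0, 1]
    with torch.no_grad():
        pred = model([image])[0]
    keep = pred["scores"] >= score_threshold
    return pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]

def transcribe_command(audio_path: str) -> str:
    """Convert a recorded speech command to text with CMU Sphinx."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_sphinx(audio)

# Example usage (placeholder file names):
# boxes, labels, scores = detect_objects("hand_camera_frame.png")
# print(transcribe_command("command.wav"))
```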
From the percepts, Baxter attempts to perform mental actions, which it carries out using a cognitive architecture called MIDCA. In this scenario, it reasons about a tic-tac-toe game. Its initial motive is to win, but it changes its goal and plays for a draw when winning becomes impossible.
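The goal change from winning to drawing can be illustrated with plain minimax on the tic-tac-toe board, as in the sketch below. It assumes Baxter plays 'X' and is only an illustration of the goal operation, not MIDCA's actual goal-change mechanism.

```python
# Minimal sketch of the win -> draw goal change using minimax; Baxter is assumed
# to play 'X'. Illustrative only, not MIDCA's implementation.
from typing import List, Optional

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board: List[str]) -> Optional[str]:
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board: List[str], to_move: str) -> int:
    """Best achievable outcome for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = to_move
            values.append(minimax(board, "O" if to_move == "X" else "X"))
            board[i] = " "
    return max(values) if to_move == "X" else min(values)

def select_goal(board: List[str], to_move: str = "X") -> str:
    """Keep the 'win' goal if a win can still be forced; otherwise aim for a draw."""
    return "win" if minimax(board, to_move) == 1 else "draw"

# Example: X opened in a corner and O replied in the center; X can no longer
# force a win, so the goal shifts from "win" to "draw".
board = ["X", " ", " ",
         " ", "O", " ",
         " ", " ", " "]
print(select_goal(board))  # draw
```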
Finally, Baxter performs actions in the real world using its arms, each of which has seven degrees of freedom. Baxter uses the Robot Operating System (ROS) and inverse kinematics to perform real-world actions with its end effectors.
Team member • May 2016 - December 2016
Any robot working in the real world must perform three types of actions. First, it should perceive the world around it. Second, from the percepts, it should try to understand the world. Third, it should execute specific tasks in the world according to its abilities. The robot performing all three actions in the demo is a Baxter. ...
The Baxter robot receives sensory signals from the real world using cameras and microphones. It has built-in cameras on its face and in the palms of its two hands, plus an external Asus Kinect camera on its waist. From these cameras, we receive 2-dimensional images and 3-dimensional point clouds. We use Convolutional Neural Networks to process the data and detect real-world objects. In addition, Baxter has a built-in microphone, and we use Sphinx to convert speech commands to text.
From the percepts, Baxter attempts to perform mental actions, which it carries out using a cognitive architecture called MIDCA. In this scenario, it tries to stack the red block on the green block. When the world changes and Baxter perceives that it can no longer pick up the red block, the agent revises its initial plan and still achieves the goal.
Finally, Baxter performs actions in the real world using its arms, each of which has seven degrees of freedom. Baxter uses the Robot Operating System (ROS) and inverse kinematics to perform real-world actions with its end effectors.
Team member • March 2016 - December 2021
The above figure depicts a cognitive architecture called the Metacognitive Integrated Dual-Cycle Architecture (MIDCA). It has two layers: a cognitive layer that operates on the real world and a metacognitive layer that operates on the cognitive layer. Each layer has six modules that perform six distinct operations. ...
The cognitive layer observes the world in the "Perceive" phase. It then attempts to understand the percepts and generate goals from them in the "Interpret" phase. Next, it tracks the completion of goals in the "Evaluate" phase. If there are multiple goals, the agent chooses and prioritizes among them in the "Intend" phase. It then plans for the selected goals in the "Plan" phase. Finally, it takes each action in the plan and applies it in the real world in the "Act" phase.
The metacognitive phases are the same as the cognitive phases, but they operate on the cognitive layer instead of the real world.
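A schematic sketch of this six-phase cycle is shown below; the phase functions are placeholders for illustration and do not reflect MIDCA's actual phase API.

```python
# Schematic sketch of the Perceive-Interpret-Evaluate-Intend-Plan-Act cycle.
# The phases here are stand-ins that only record that they ran.
from typing import Any, Callable, Dict, List

Phase = Callable[[Dict[str, Any]], Dict[str, Any]]

def run_cycle(state: Dict[str, Any], phases: List[Phase], cycles: int = 1) -> Dict[str, Any]:
    """Run each phase in order over a shared state, for a number of cycles."""
    for _ in range(cycles):
        for phase in phases:
            state = phase(state)
    return state

def make_phase(name: str) -> Phase:
    """Build a placeholder phase that appends its name to a trace."""
    def phase(state: Dict[str, Any]) -> Dict[str, Any]:
        state.setdefault("trace", []).append(name)
        return state
    return phase

phases = [make_phase(n) for n in
          ["perceive", "interpret", "evaluate", "intend", "plan", "act"]]
print(run_cycle({}, phases)["trace"])
# ['perceive', 'interpret', 'evaluate', 'intend', 'plan', 'act']
```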
Grants 2016 - 2021 Worked under several prestigious grants: NSF 1849131; ONR N00014-18-1-2009; AFOSR FA2386-17-1-4063.
Start Up 2018 Our startup idea, SquadUp, won the October 2018 hackathon conducted by Y Combinator, with 250 participants across 80 projects.
[ACS 2022] Kondrakunta, S., Gogineni, V. R., & Cox, M. T. (In press). Agent Goal Management using Goal Operations. In the Tenth Advances in Cognitive Systems Conference-2022. Cognitive Systems Foundation.
[ACS 2022] Gogineni, V. R., Kondrakunta, S., & Cox, M. T. (In press). Multi-agent Goal Delegation. In the Tenth Advances in Cognitive Systems Conference-2022. Cognitive Systems Foundation.
[GRW 2021] Kondrakunta, S., Gogineni, V. R., & Cox, M. T. (2021, December). Agent Goal Management using Goal Operations. In the Ninth Goal Reasoning Workshop at Advances in Cognitive Systems Conference-2021. Cognitive Systems Foundation.
[GRW 2021] Gogineni, V. R., Kondrakunta, S., & Cox, M. T. (2021, December). Multi-agent Goal Delegation. In the Ninth Goal Reasoning Workshop at Advances in Cognitive Systems Conference-2021. Cognitive Systems Foundation.
[ACS 2021] Yuan, W., Munoz-Avila, H., Gogineni, V. R., Kondrakunta, S., Cox, M. T., & He, L., (in press). Task Modifiers for HTN Planning and Acting. In the poster presentation at the Ninth Annual Conference on Advances in Cognitive Systems. Cognitive Systems Foundation.
[ACS 2021] Kondrakunta, S., Gogineni, V. R., Cox, M. T., Coleman, D., Tan, X., Lin, T., Hou, M., Zhang, F., McQuarrie, F., & Edwards, C. (In press). The Rational Selection of Goal Operations and the Integration of Search Strategies with Goal-Driven Marine Autonomy. In the Ninth Annual Conference on Advances in Cognitive Systems. Cognitive Systems Foundation.
[ACS 2021] Cox, M. T., Mohammad, Z., Kondrakunta, S., Gogineni, V. R., Dannenhauer, D., & Larue, O. (In press). Computational Metacognition. In the Ninth Annual Conference on Advances in Cognitive Systems. Cognitive Systems Foundation.
[ACC 2021] Kondrakunta, S., Gogineni, V. R., Molineaux, M., & Cox, M. T. (In press). Problem recognition, explanation and goal formulation. In the Fifth International Conference on Applied Cognitive Computing (ACC). Springer.
[ACC 2021] Kondrakunta, S., & Cox, M. T. (In press). Autonomous Goal Selection Operations for Agent-Based Architectures. In the Fifth International Conference on Applied Cognitive Computing (ACC). Springer.
[FLAIRS 2020] Gogineni, V., Kondrakunta, S., Molineaux, M., & Cox, M. T. (2020, May). Case-Based Explanations and Goal Specific Resource Estimations. In the Thirty-Third Florida Artificial Intelligence Research Society Conference, North America (pp. 407-412). AAAI Press.
[MIDCA Workshop 2019] Dannenhauer, D., Schmitz, S., Eyorokon, V., Gogineni, V. R., Kondrakunta, S., Williams, T., & Cox, M. T. (2019). MIDCA Version 1.4: User manual and tutorial for the Metacognitive Integrated Dual-Cycle Architecture (Tech. Rep. No. COLAB2-TR-3). Dayton, OH: Wright State University, Collaboration and Cognition Laboratory.
[ACS 2019] Kondrakunta, S., Gogineni, V. R., Brown, D., Molineaux, M., & Cox, M. T. (2019). Problem recognition, explanation and goal formulation. In Proceedings of the Seventh Annual Conference on Advances in Cognitive Systems (pp. 437-452). Cognitive Systems Foundation.
[ICCBR 2019] Gogineni, V. R., Kondrakunta, S., Brown, D., Molineaux, M., & Cox, M. T. (2019, September). Probabilistic Selection of Case-Based Explanations in an Underwater Mine Clearance Domain. In International Conference on Case-Based Reasoning (pp. 110-124). Springer, Cham.
[GRW: IJCAI 2018] Kondrakunta, S., Gogineni, V. R., Molineaux, M., Munoz-Avila, H., Oxenham, M., & Cox, M. T. (2018). Toward problem recognition, explanation and goal formulation. In Working Notes of the 2018 IJCAI/FAIM Goal Reasoning Workshop, Stockholm, Sweden. IJCAI.
[XCBR: ICCBR 2018] Gogineni, V., Kondrakunta, S., Molineaux, M., & Cox, M. T. (2018). Application of case-based explanations to formulate goals in an unpredictable mine clearance domain. In Proceedings of the ICCBR-2018 Workshop on Case-Based Reasoning for the Explanation of Intelligent Systems, Stockholm, Sweden (pp. 42-51). Springer, Cham.
[GRW: IJCAI 2017] Dannenhauer, D., Munoz-Avila, H., & Kondrakunta, S. (2017). Goal-Driven Autonomy Agents with Sensing Costs. In Working Notes of the 2017 IJCAI Goal Reasoning Workshop, Melbourne, Australia. IJCAI.
[GRW: IJCAI 2017] Kondrakunta, S., & Cox, M. T. (2017, July). Autonomous goal selection operations for agent-based architectures. In Working Notes of the 2017 IJCAI Goal Reasoning Workshop, Melbourne, Australia. IJCAI.
[AAAI 2017] Cox, M., Dannenhauer, D., & Kondrakunta, S. (2017, February). Goal operations for cognitive systems. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 31, No. 1). AAAI Press.
[IEEE 2015] Kishore, P. V. V., Rahul, R., Kondrakunta, S., & Sastry, A. S. C. S. (2015, August). Crowd density analysis and tracking. In 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI) (pp. 1209-1213). IEEE.
[Doctoral Dissertation 2021] Kondrakunta, S. (2021, December). Complex Interactions between Multiple Goal Operations in Agent Goal Management. Doctoral dissertation, Wright State University.
[Master's Thesis 2017] Kondrakunta, S. (2017, August). Implementation and Evaluation of Goal Selection in a Cognitive Architecture. Master's thesis, Wright State University.
ACS 2022 Tenth Annual Conference on Advances in Cognitive Systems. Arlington, Virginia, USA.
GRW 2021 Ninth Goal Reasoning Workshop. Virtual, USA.
CSCE 2021 American Council on Science and Education. Las Vegas, Nevada, USA.
ACS 2021 Ninth Annual Conference on Advances in Cognitive Systems. Virtual, USA.
ACS 2020 Eighth Annual Conference on Advances in Cognitive Systems. Palo Alto, California, USA.
ACS 2019 Seventh Annual Conference on Advances in Cognitive Systems. Massachusetts Institute of Technology, Massachusetts, USA. Poster presentation on problem recognition.
MIDCA 2018 Second Annual MIDCA Workshop. Wright State University, Ohio, USA. Oral presentation on goal operations in cognitive architecture.
AAMAS 2018 The 17th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-2018). Stockholmsmässan, Stockholm, Sweden.
ICML 2018 Thirty-fifth International Conference on Machine Learning. Stockholmsmässan, Stockholm, Sweden.
ECAI 2018 The 23rd European Conference on Artificial Intelligence. Stockholmsmässan, Stockholm, Sweden.
IJCAI 2018 The 27th International Joint Conference on Artificial Intelligence. Stockholmsmässan, Stockholm, Sweden.
ICCBR 2018 The 26th International Conference on Case-Based Reasoning. Stockholmsmässan, Stockholm, Sweden.
GRW: IJCAI 2018 The 6th Goal Reasoning Workshop. Stockholmsmässan, Stockholm, Sweden. Oral presentation on goal selection operations.
MIDCA 2017 First Annual MIDCA Workshop. Wright State University, Ohio, USA. Oral presentation on MIDCA Architecture.
COMTOR DerbyHacks 3, University of Louisville, KY.
HACK-STATA Hack-CWRU, Case Western Reserve University, OH.
YOUR VIRTUAL DOCTOR SpartahackIV, Michigan State University, MI.
ACS 2022 Session Chair and Sub-reviewer at the tenth annual Conference on Advances in Cognitive Systems.
GRW 2021 Chair for the 9th Goal Reasoning Workshop (GRW) held at the 9th Conference on Advances in Cognitive Systems.
INTEX 2021 PC Member for the Integrated Execution (IntEx) / Goal Reasoning (GR) workshop held at the 31st International Conference on Automated Planning and Scheduling.
INTEX 2020 PC Member for the Integrated Execution (IntEx) / Goal Reasoning (GR) workshop held at the 30th International Conference on Automated Planning and Scheduling.
ECAI 2020 Sub-reviewer for the 24th European Conference on Artificial Intelligence.
MITW 2019 Organized the annual Make-IT-Wright Hackathon at Wright State University to encourage undergraduate students to code.