E-PashuHaat Transportal

GPMS TRANSPORTAL APPLIED AI COURSES


Course 60: Interaction AI Fundamentals

Duration: 36 Hours (6 Hours per week - 2 Hrs x 3)

Week 1 - What is Interaction AI
Learning Outcome
Students should be able to identify use cases for interaction AI, and describe its role in enhancing human-computer interaction.
1.1 Introduction to basic interaction use cases (chatbots, virtual assistants, etc.)
1.2 Overview of human-computer interaction theories and AI integration.
1.3 Expanding interaction capabilities to speech, vision, and gestures.

Practical Component
Practical demonstrations on working with various interaction AI systems, understanding their basic mechanisms, and experimenting with small-scale applications.
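As a starting point for these demonstrations, the sketch below shows a minimal rule-based chatbot in Python, one of the simplest interaction AI systems students can experiment with. The intent keywords and responses are illustrative assumptions, not part of the course material.

```python
# Minimal rule-based chatbot: matches keywords in the user's message to a
# canned intent and replies with a fixed response. Purely illustrative.

# Illustrative intent -> (keywords, response) table; these entries are assumptions.
INTENTS = {
    "greeting": (["hello", "hi", "hey"], "Hello! How can I help you today?"),
    "hours":    (["hours", "open", "close"], "We are open from 9 am to 5 pm."),
    "bye":      (["bye", "goodbye"], "Goodbye! Have a nice day."),
}

FALLBACK = "Sorry, I did not understand that. Could you rephrase?"


def reply(message: str) -> str:
    """Return the response of the first intent whose keyword appears in the message."""
    text = message.lower()
    for keywords, response in INTENTS.values():
        if any(word in text for word in keywords):
            return response
    return FALLBACK


if __name__ == "__main__":
    # Simple read-eval-print loop for classroom experimentation.
    while True:
        user = input("You: ")
        if user.strip().lower() == "quit":
            break
        print("Bot:", reply(user))
```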
Week 2 - Interaction AI Technologies
Learning Outcome
Students will understand the technology stack that enables interaction AI and its working mechanisms.
2.1 Overview of natural language processing (NLP), speech recognition, and gesture recognition.
2.2 Understanding the integration of AI with real-time systems.
2.3 The role of machine learning in improving interaction intelligence.

Practical Component
Practical exercises in building a basic interaction system using NLP and speech recognition, experimenting with different AI models and settings.
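A possible shape for this exercise is sketched below: one utterance is captured from the microphone, transcribed, and mapped to a simple intent. It assumes the third-party SpeechRecognition package and a working microphone; the keyword-to-intent mapping is an illustrative assumption.

```python
# Sketch of a voice-driven interaction loop: capture audio, transcribe it,
# and map the transcript to a simple intent. Assumes the third-party
# SpeechRecognition package (pip install SpeechRecognition) and a microphone.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Illustrative keyword -> intent mapping; replace with the course's own intents.
KEYWORD_INTENTS = {"weather": "get_weather", "time": "get_time", "stop": "exit"}


def detect_intent(transcript: str) -> str:
    """Very small keyword-based intent detector used for the exercise."""
    text = transcript.lower()
    for keyword, intent in KEYWORD_INTENTS.items():
        if keyword in text:
            return intent
    return "unknown"


def listen_once() -> str:
    """Record one utterance from the microphone and return its transcript."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    # recognize_google sends the audio to Google's free web API; other back ends
    # (e.g. recognize_sphinx for offline use) can be swapped in to compare models.
    return recognizer.recognize_google(audio)


if __name__ == "__main__":
    transcript = listen_once()
    print("Heard:", transcript)
    print("Intent:", detect_intent(transcript))
```

Swapping the recognition back end or the intent rules is one way to run the "different AI models and settings" comparison mentioned above.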
Week 3 - Deep Dive into Natural Language Processing (NLP)
Learning Outcome
Students will learn how NLP is utilized in interaction AI, and how it can be enhanced to improve human interaction.
3.1 The basics of text analysis, tokenization, and sentiment analysis.
3.2 Advanced NLP techniques for question-answering, intent detection, and conversational AI.
3.3 Implementing NLP models for real-world interaction tasks.

Practical Component
Practical demonstrations and hands-on coding tasks using NLP libraries and tools to create chatbots or virtual assistants.
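The sketch below ties together the Week 3 building blocks: tokenization and sentiment analysis with NLTK feeding a trivial chatbot reply. It assumes NLTK is installed and that the punkt and vader_lexicon resources can be downloaded; the reply thresholds and wording are illustrative assumptions.

```python
# Sketch of Week 3 building blocks: tokenization and sentiment analysis with
# NLTK, feeding a trivial chatbot reply. Assumes NLTK (pip install nltk) and
# the punkt / vader_lexicon resources (resource names can vary by NLTK version).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# One-time resource downloads (cached locally after the first run).
nltk.download("punkt", quiet=True)
nltk.download("vader_lexicon", quiet=True)

analyzer = SentimentIntensityAnalyzer()


def analyze(message: str) -> dict:
    """Tokenize the message and score its sentiment."""
    tokens = nltk.word_tokenize(message)
    scores = analyzer.polarity_scores(message)  # keys: neg, neu, pos, compound
    return {"tokens": tokens, "sentiment": scores}


def respond(message: str) -> str:
    """Pick a reply based on the compound sentiment score (illustrative thresholds)."""
    compound = analyze(message)["sentiment"]["compound"]
    if compound >= 0.3:
        return "Glad to hear that! How else can I help?"
    if compound <= -0.3:
        return "I'm sorry to hear that. Let me see what I can do."
    return "Thanks for the message. Could you tell me a bit more?"


if __name__ == "__main__":
    print(analyze("The new assistant is surprisingly helpful!"))
    print(respond("The new assistant is surprisingly helpful!"))
```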
Week 4 - Speech and Gesture Recognition
Learning Outcome
Students will gain an understanding of speech and gesture recognition systems and their applications in interactive AI.
4.1 Speech-to-text and text-to-speech conversion systems.
4.2 Introduction to gesture recognition techniques.
4.3 Integrating voice and gesture with AI systems for seamless interaction.

Practical Component
Building small projects that integrate speech recognition with gesture control to trigger specific actions within the AI system.
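One way to structure such a project is sketched below: both input channels (a speech transcript and a gesture label) are reduced to a command name and dispatched through the same handler table. The recognition back ends themselves (speech-to-text, hand tracking) are assumed to come from whichever libraries the class uses; the command names and handlers here are illustrative assumptions.

```python
# Sketch of the "trigger specific actions" exercise: speech transcripts and
# gesture labels are mapped to command names and dispatched to shared handlers.
# The actual recognizers are stubbed out; only the dispatch logic is shown.
from typing import Callable, Dict


# Illustrative command handlers; names are assumptions for the exercise.
def turn_light_on() -> None:
    print("Action: light on")


def turn_light_off() -> None:
    print("Action: light off")


ACTIONS: Dict[str, Callable[[], None]] = {
    "light_on": turn_light_on,
    "light_off": turn_light_off,
}

SPEECH_COMMANDS = {"turn on the light": "light_on", "turn off the light": "light_off"}
GESTURE_COMMANDS = {"thumbs_up": "light_on", "fist": "light_off"}


def dispatch(command: str) -> None:
    """Run the handler for a command name, ignoring unknown commands."""
    handler = ACTIONS.get(command)
    if handler is not None:
        handler()


def on_speech(transcript: str) -> None:
    """Called with the output of whatever speech recognizer the project uses."""
    dispatch(SPEECH_COMMANDS.get(transcript.lower().strip(), ""))


def on_gesture(label: str) -> None:
    """Called with the label produced by the gesture recognition component."""
    dispatch(GESTURE_COMMANDS.get(label, ""))


if __name__ == "__main__":
    on_speech("Turn on the light")   # -> Action: light on
    on_gesture("fist")               # -> Action: light off
```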
Week 5 - Multi-modal Interaction Systems
Learning Outcome
Students will understand the creation and functionality of multi-modal systems that combine speech, vision, and touch for improved user interaction.
5.1 Understanding multi-modal AI architectures.
5.2 Enhancing interaction by combining inputs from voice, text, and gestures.
5.3 Designing responsive and adaptive AI systems that can handle complex interactions.

Practical Component
Students will work on integrating multiple input forms (e.g., text and voice) in a single interaction model, and analyze the system's efficiency in various contexts.
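A common pattern for this integration is late fusion, sketched below: each modality produces (intent, confidence) pairs independently, and the fusion step picks the intent with the highest weighted score. The intent names, modality weights, and example predictions are illustrative assumptions for the exercise.

```python
# Sketch of late fusion for a multi-modal interaction model: each modality
# contributes (intent, confidence) pairs, and the fused intent is the one
# with the highest weighted combined score.
from collections import defaultdict
from typing import Dict, List, Tuple

# How much each modality contributes to the fused score (assumed weights).
MODALITY_WEIGHTS = {"text": 0.5, "voice": 0.3, "gesture": 0.2}


def fuse(predictions: Dict[str, List[Tuple[str, float]]]) -> str:
    """Combine per-modality (intent, confidence) lists into a single intent."""
    scores: Dict[str, float] = defaultdict(float)
    for modality, intent_list in predictions.items():
        weight = MODALITY_WEIGHTS.get(modality, 0.0)
        for intent, confidence in intent_list:
            scores[intent] += weight * confidence
    # Fall back to an explicit "unknown" intent if nothing was recognized.
    return max(scores, key=scores.get) if scores else "unknown"


if __name__ == "__main__":
    example = {
        "text":    [("play_music", 0.9), ("set_alarm", 0.1)],
        "voice":   [("play_music", 0.6)],
        "gesture": [("volume_up", 0.8)],
    }
    print(fuse(example))  # expected: play_music
```

Comparing how often the fused intent matches the user's true intent across different input combinations is one simple way to run the efficiency analysis described above.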
Week 6 - Project and Assessment
Learning Outcome
Students will design and implement a basic interaction AI system of their choice, assess its usability, and present their project.
6.1 Selecting a real-world problem and designing a custom interaction solution.
6.2 Peer assessment and review of project outcomes.
6.3 Final presentations, feedback, and assessment of overall project quality.

Practical Component
Presenting the final interaction AI project, with live demonstrations, system improvements, and lessons learned during the process.