Musketeers: Autonomous pickup and delivery fleet

In this project, we develop a multi-robot autonomous delivery framework for a fleet of 20 Husky robots from Clearpath Robotics in a simulation environment. The robots, called "Musky", can be sent to pickup and delivery locations via command-line inputs and/or a Graphical User Interface (GUI): they traverse to a preset location to pick up an order from a restaurant and deliver it to a preset delivery location.

The proposed delivery system consists of a fleet of Husky robots parked at multiple base stations, each with a unique Base ID. Each station charges robots and acts as a fulfillment center that processes orders. When customer orders arrive, our Task Planner assigns delivery jobs to available Musketeers with sufficient charge to fulfill them. Once an order is delivered, the robot's destination becomes the new source location for its next order, and these operations continue as scheduled by the task planner.

Navigation of the Musketeers to their destinations uses the ROS move_base framework, with the ROS gmapping package building the map that feeds the local and global costmaps used for obstacle detection. We developed multiple user interfaces: a GUI built with Kivy and a CLI for command-line input. The project was developed using agile methodologies and test-driven development. More information is available in the GitHub repository.
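
The actual Task Planner lives in the repository linked below; as a rough illustration of the assignment logic described above, here is a minimal, hypothetical Python sketch. The Musketeer/Order classes, the battery threshold, and the nearest-idle-robot rule are simplifying assumptions, not the project's real API:

    from dataclasses import dataclass
    from math import hypot
    from typing import List, Optional

    # Hypothetical, simplified stand-ins for the project's task planner types.
    @dataclass
    class Musketeer:
        robot_id: int
        location: tuple          # current (x, y) pose, initially its base station
        battery: float           # remaining charge, 0.0 - 1.0
        busy: bool = False

    @dataclass
    class Order:
        pickup: tuple            # restaurant / base-station coordinates
        dropoff: tuple           # customer delivery coordinates

    def assign_order(fleet: List[Musketeer], order: Order,
                     min_battery: float = 0.3) -> Optional[Musketeer]:
        """Pick the closest idle robot with enough charge to take the order."""
        candidates = [r for r in fleet if not r.busy and r.battery >= min_battery]
        if not candidates:
            return None          # order stays queued until a robot frees up
        robot = min(candidates,
                    key=lambda r: hypot(r.location[0] - order.pickup[0],
                                        r.location[1] - order.pickup[1]))
        robot.busy = True
        return robot

    def complete_order(robot: Musketeer, order: Order) -> None:
        """After delivery, the drop-off becomes the robot's new source location."""
        robot.location = order.dropoff
        robot.busy = False

    # Example: two parked robots, one incoming order.
    fleet = [Musketeer(1, (0.0, 0.0), 0.9), Musketeer(2, (5.0, 5.0), 0.2)]
    order = Order(pickup=(4.0, 4.0), dropoff=(8.0, 1.0))
    chosen = assign_order(fleet, order)      # robot 1 (robot 2 is below the charge threshold)
    if chosen:
        complete_order(chosen, order)        # in the real system, move_base drives the robot
    print(chosen.robot_id, chosen.location)  # 1 (8.0, 1.0)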

References:

Musketeers: Autonomous pickup and delivery fleet [pdf] [ppt] [github]
Rahul Karanam, Sumedh Koppula, Pratik Acharya, ENPM808X 2021

[Figure: activity diagram]

Modelling and Control: Robotics projects focused on ROS, control systems, and planning.

    1. KitchenBot: PickPack - PickPack is a six-axis UR5 robot that can assist restaurants by packaging their food products or helping with the cooking process. The goal of this project is to design and simulate a six-axis robot to assist food packaging and processing in food industries such as restaurants.
  • Tech Stack: We used MoveIt to plan the UR5 robot's pick-and-place operations. Forward and inverse kinematics were validated using SymPy and KDL (a C++ package). UR5 Robot Specifications: we chose the UR5 (Universal Robots) robotic arm to perform pick-and-place tasks, with the base of the six-axis arm clamped onto the robot base mentioned above. Commands were sent to the robot through the MoveIt move_group interface (C++), and trajectories to a given destination pose were planned with the OMPL planner. A PID controller was implemented to control the joints, and sensor data was visualized using RQT and RViz. Please refer to the report and GitHub links below for more information; a minimal pose-goal sketch follows this entry's references.
  • References:

    KitchenBot: PickPack [pdf] [ppt] [github]
    Rahul Karanam, Sumedh Koppula, ENPM662 2021
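
    The project drove the arm through the MoveIt move_group C++ interface; the following is only a rough Python (moveit_commander) sketch of the same pose-goal flow, where the planning group name "manipulator" and the target pose values are assumptions:

        import sys
        import rospy
        import moveit_commander
        from geometry_msgs.msg import Pose

        # Minimal MoveIt flow for a single pose-goal move of the kind used in the
        # pick-and-place pipeline (Python equivalent of the C++ move_group calls).
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node("ur5_pick_place_sketch")

        group = moveit_commander.MoveGroupCommander("manipulator")
        group.set_planner_id("RRTConnect")   # OMPL planner; exact ID depends on the MoveIt config
        group.set_planning_time(5.0)

        # Destination pose for the end effector (values are placeholders).
        target = Pose()
        target.position.x, target.position.y, target.position.z = 0.4, 0.1, 0.4
        target.orientation.w = 1.0

        group.set_pose_target(target)
        success = group.go(wait=True)        # plan and execute
        group.stop()                         # make sure there is no residual movement
        group.clear_pose_targets()
        rospy.loginfo("Pick pose reached: %s", success)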

    2. Inverse Kinematics Solver for 6 DOF manipulator: This API solves the inverse kinematics of a robot manipulator and currently supports only manipulators with six degrees of freedom. The input coordinates [X, Y, Z] are sent to the IK solver, which returns the output_joint_angles. A companion FK solver (FK_solver) checks whether the resulting joint angles stay within the arm's constraint (bias) range, verifies the output_bias, and rejects configurations near singularities. Given a desired location, the software computes a trajectory and returns a vector of all the joint angles for the robot, and the coordinates covered up to the desired location are visualized with matplotlib. The IK solver can be integrated with any six-degree-of-freedom manipulator; a minimal numerical IK sketch follows this entry's references.

    References:

    Inverse Kinematics Solver for 6 DOF manipulator [pdf] [UML] [github]
    Rahul Karanam, Sumedh Koppula, ENPM808X 2021
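
    The project's own FK/IK implementation is in the report and repository above; as a rough illustration of the idea (an IK routine that maps [X, Y, Z] to joint angles while an FK function checks the result), here is a minimal damped-least-squares sketch in Python. The 3-DOF stand-in arm, its link lengths, and the fk/jacobian/ik_solver names are assumptions, not the project's actual solver:

        import numpy as np

        # Stand-in position-only FK for a 3-DOF articulated arm (waist, shoulder, elbow).
        # The project's solver handles a full 6-DOF arm; link lengths here are made up.
        L1, L2 = 0.4, 0.3

        def fk(q):
            reach = L1 * np.cos(q[1]) + L2 * np.cos(q[1] + q[2])
            return np.array([np.cos(q[0]) * reach,
                             np.sin(q[0]) * reach,
                             L1 * np.sin(q[1]) + L2 * np.sin(q[1] + q[2])])

        def jacobian(q, eps=1e-6):
            """Numerical position Jacobian of fk at q (central finite differences)."""
            J = np.zeros((3, len(q)))
            for i in range(len(q)):
                dq = np.zeros(len(q))
                dq[i] = eps
                J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
            return J

        def ik_solver(target_xyz, q0, damping=0.05, tol=1e-4, max_iter=200):
            """Damped-least-squares IK: returns joint angles reaching [X, Y, Z]."""
            q = np.array(q0, dtype=float)
            for _ in range(max_iter):
                err = np.asarray(target_xyz) - fk(q)
                if np.linalg.norm(err) < tol:
                    break
                J = jacobian(q)
                # Damping keeps the update bounded near singular configurations.
                q += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(3), err)
            return q

        q = ik_solver([0.3, 0.2, 0.25], q0=[0.1, 0.3, -0.3])
        print(np.round(fk(q), 4))   # should be close to [0.3, 0.2, 0.25]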

    3. Design and Simulate an LQR and LQG controller for a double inverted pendulum: Implementation of LQR and LQG controllers in Matlab for controlling a crane with two suspended pendulums; a minimal LQR sketch follows this entry's references.

    References:

    Design and Simulate a LQR and LQG controller for double inverted pendulum [report] [github]
    Rahul Karanam, Pratik Acharya, ENPM667 2021
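
    The project's controllers were implemented in Matlab; purely as an illustration of the LQR step (the LQG additionally uses a Kalman filter for state estimation), here is a minimal Python sketch that computes an LQR gain for a toy double-integrator stand-in. The system matrices and weights are placeholders, not the linearized crane-and-pendulums model from the report:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Toy double-integrator stand-in (placeholder for the linearized crane model).
        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])     # state weights
        R = np.array([[0.1]])        # input weight

        # LQR gain: K = R^-1 B^T P, where P solves the continuous algebraic Riccati equation.
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)

        # Closed-loop dynamics x_dot = (A - B K) x should be stable (eigenvalues in the LHP).
        eigs = np.linalg.eigvals(A - B @ K)
        print("LQR gain K:", np.round(K, 3))
        print("closed-loop eigenvalues:", np.round(eigs, 3))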

    4. Double Q-PID: Control of mobile robots using the Double Q-PID algorithm.
  • This project implements the paper by Ignacio Carlucho et al., "Double Q-PID algorithm for mobile robot control": an expert-agent system, based on a reinforcement learning agent, for self-adapting multiple low-level PID controllers in mobile robots. We demonstrate our implementation on the Husky and the Hector quadrotor. Our technical report on the paper is linked below, and a minimal sketch of the double Q-learning update follows the references.

    References:

    Double Q-PID algorithm for mobile robot control [pdf] [report] [github]
    Rahul Karanam, Sumedh Koppula, ENPM667 2021
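
    As a rough illustration of the double Q-learning update that Double Q-PID builds on, here is a minimal tabular Python sketch in which the discrete actions adjust a PID gain. The state discretization, action set, rewards, and constants are assumptions, not the paper's or the project's actual formulation:

        import numpy as np

        # Tabular double Q-learning over discrete PID-gain adjustments.
        N_STATES = 10                           # discretized tracking-error bins
        ACTIONS = np.array([-0.1, 0.0, 0.1])    # decrease / keep / increase Kp
        ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

        Q_a = np.zeros((N_STATES, len(ACTIONS)))
        Q_b = np.zeros((N_STATES, len(ACTIONS)))
        rng = np.random.default_rng(0)

        def choose_action(state):
            """Epsilon-greedy over the sum of the two Q tables."""
            if rng.random() < EPS:
                return int(rng.integers(len(ACTIONS)))
            return int(np.argmax(Q_a[state] + Q_b[state]))

        def double_q_update(s, a, reward, s_next):
            """Randomly update one table, using the other to evaluate the greedy action."""
            if rng.random() < 0.5:
                a_star = int(np.argmax(Q_a[s_next]))
                Q_a[s, a] += ALPHA * (reward + GAMMA * Q_b[s_next, a_star] - Q_a[s, a])
            else:
                b_star = int(np.argmax(Q_b[s_next]))
                Q_b[s, a] += ALPHA * (reward + GAMMA * Q_a[s_next, b_star] - Q_b[s, a])

        # One illustrative step: state = error bin, reward = negative tracking error.
        s = 4
        a = choose_action(s)
        kp_delta = ACTIONS[a]          # applied to the low-level PID gain in the real loop
        reward, s_next = -0.2, 3       # placeholders for the environment response
        double_q_update(s, a, reward, s_next)
        print("chosen Kp increment:", kp_delta)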

Deep Learning

Text2Py: A transformer-based model that generates Python snippets, with proper whitespace indentation, from natural language (English) input.

Generating machine code from human language is a challenging problem. In this project I used a neural Transformer to generate Python source code from a given English description. I used a custom dataset in which each record starts with an English description line beginning with the # character, followed by the Python code; the dataset is available at the link below. A pipeline creates separate tokenizers for English and for Python (including its indentation), and after preprocessing the dataset is split into question-answer pairs (English and Python code).

The model is an encoder-decoder Transformer built around self-attention. Word embeddings of the input sequence are computed together with positional encodings to capture positional information. The encoder takes the English words, creates embeddings (pre-trained GloVe embeddings are used), and passes them through all encoder layers to produce the encoded output, which is fed to the decoder. The decoder's standard architecture is modified to include the output token type as an additional input, so its inputs are [output Python token, output Python token type, position]; embeddings are created for all three, passed through masked multi-head attention and layer normalization, combined with the encoder output through further multi-head attention and layer normalization layers, and finally run through a feed-forward layer and a softmax to produce the output. I used the Adam optimizer with a cross-entropy loss, and also tracked BLEU; the best result after training for several epochs was | Test Loss: 0.125 | Test PPL: 1.133 |.
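
The trained model and full pipeline are in the source code linked below; the following is only a minimal PyTorch sketch of the architecture described above (sinusoidal positional encoding plus an encoder-decoder Transformer whose decoder also embeds a token-type input). The vocabulary sizes, model width, and token-type count are made-up placeholders, and the real model uses pre-trained GloVe embeddings rather than the randomly initialized ones here:

    import math
    import torch
    import torch.nn as nn

    class PositionalEncoding(nn.Module):
        """Sinusoidal positional encoding added to the token embeddings."""
        def __init__(self, d_model, max_len=512):
            super().__init__()
            pe = torch.zeros(max_len, d_model)
            pos = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
            div = torch.exp(torch.arange(0, d_model, 2).float()
                            * (-math.log(10000.0) / d_model))
            pe[:, 0::2] = torch.sin(pos * div)
            pe[:, 1::2] = torch.cos(pos * div)
            self.register_buffer("pe", pe.unsqueeze(0))   # (1, max_len, d_model)

        def forward(self, x):                             # x: (batch, seq, d_model)
            return x + self.pe[:, : x.size(1)]

    class Text2PySketch(nn.Module):
        """Skeleton: English tokens -> Python tokens, with a token-type input."""
        def __init__(self, src_vocab, tgt_vocab, n_types, d_model=256):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, d_model)   # GloVe in the real model
            self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
            self.type_emb = nn.Embedding(n_types, d_model)    # extra output-type input
            self.pos = PositionalEncoding(d_model)
            self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
            self.out = nn.Linear(d_model, tgt_vocab)

        def forward(self, src, tgt, tgt_types):
            tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
            enc_in = self.pos(self.src_emb(src))
            dec_in = self.pos(self.tgt_emb(tgt) + self.type_emb(tgt_types))
            h = self.transformer(enc_in, dec_in, tgt_mask=tgt_mask)
            return self.out(h)                            # logits over the Python vocab

    # Shape check with random ids: batch of 2, source length 10, target length 7.
    model = Text2PySketch(src_vocab=5000, tgt_vocab=8000, n_types=20)
    logits = model(torch.randint(0, 5000, (2, 10)),
                   torch.randint(0, 8000, (2, 7)),
                   torch.randint(0, 20, (2, 7)))
    print(logits.shape)                                   # torch.Size([2, 7, 8000])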

Source code:

The Text2Py Python code generator is available here.

Dataset:

A customized dataset of 4600+ English-to-Python examples is available here.

References:

Text2Py: A transformer-based model which generates Python snippets given natural language as input [blog] [course]
Rahul Karanam, END 2021



Autonomous RC car using Raspberry Pi 3.

A scaled-down version of a self-driving car built using neural networks.

  • The system uses an L293D driver to control the motors and a Pi Camera to capture images that serve as inputs to the neural network.
  • The model was trained with a supervised learning algorithm, using a network with two hidden layers, to learn the optimized steering angle.
  • Ultrasonic sensors are used for obstacle avoidance; the Raspberry Pi acts as a server that takes the inputs and predicts the outcome with the trained network.
  • OpenCV is used to train the network, optimizing the weights by backpropagating gradients; once the optimized weights are obtained, they are applied to the model and the car can drive autonomously.
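
The original training code is not reproduced here; as a minimal illustration of an OpenCV backprop-trained network of the kind described above, here is a Python sketch using cv2.ml.ANN_MLP. The input size, the two hidden-layer widths, the three steering classes, and the random placeholder data are all assumptions:

    import numpy as np
    import cv2

    # Hypothetical shapes: flattened grayscale frames from the Pi camera (32x32 = 1024
    # inputs), two hidden layers, and 3 steering classes (left / straight / right).
    N_INPUT, HIDDEN1, HIDDEN2, N_OUTPUT = 1024, 32, 32, 3

    # Placeholder training data standing in for the recorded driving frames and labels.
    samples = np.random.rand(200, N_INPUT).astype(np.float32)
    labels = np.random.randint(0, N_OUTPUT, 200)
    responses = np.zeros((200, N_OUTPUT), np.float32)
    responses[np.arange(200), labels] = 1.0          # one-hot steering targets

    # OpenCV multilayer perceptron trained with backpropagation.
    mlp = cv2.ml.ANN_MLP_create()
    mlp.setLayerSizes(np.array([N_INPUT, HIDDEN1, HIDDEN2, N_OUTPUT], dtype=np.int32))
    mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
    mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP, 0.001)
    mlp.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER | cv2.TERM_CRITERIA_EPS, 500, 1e-4))
    mlp.train(samples, cv2.ml.ROW_SAMPLE, responses)

    # Predict a steering command for a new frame: index of the largest output.
    _, out = mlp.predict(samples[:1])
    steering_class = int(np.argmax(out))
    print("predicted steering class:", steering_class)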

References:

Autonomous RC Car: A scaled-down version of a self-driving car. [photos]
Rahul Karanam, CITD 2017

Undergraduate Capstone: Design and Development of Unmanned Ground Vehicle for Surveillance

A four-legged articulated UGV with a 3-DOF pan-tilt camera, used for surveillance in remote areas. The vehicle's four articulated legs allow it to operate on any terrain and climb stairs, the pan-tilt camera provides visual support, and a robotic arm can carry a payload over short distances. The vehicle is controlled through a wireless joystick with a built-in display for visual feedback. The project can be further developed by collaborating with drones for vision and path guidance to access otherwise inaccessible terrain and environments.

References:

Design and Development of Unmanned Ground Vehicle for Surveillance [photos] [demo]
Rahul Karanam, T. Muthuramalingam, SRM 2018
