Update from RoboCup 2018 – Montreal CA

My son’s team had qualified for the rescue line competition at the international RoboCup 2018 competition in Montreal, Canada. While his team did not do as well as hoped (placing 26th out of 38 teams), it was a great learning experience. Since it was our first international competition, we were also awed by the level of the competition and the advanced robotics capabilities on display.

Here’s a description of some of the advances we saw at the conference.

Tickets:

We got tickets early in the morning on Sunday, June 17th, and the rest of the day went to team setup, practice runs, and calibrating and tuning the robot programs.

Setup:

Sunday, June 17 was for setup at the Palais des Congrès in Montreal.

Rescue Line Competition:

The competition primarily consisted of teams building their own bots to navigate a variety of challenges: line tracing; left, right and U turns; road bumps, debris and obstacles; ramps, bridges and so on. In addition, teams could score points in the rescue room by identifying victims and carrying them to the respective rescue zones. The bot had to provide all of these capabilities and use its sensors to autonomously navigate the obstacle course and accomplish the tasks mentioned above.

Rescue Line Winning Team – Team Kavosh from Iran. The following is a video of their winning robot in the rescue zone.

Sid’s Team run –

Winning Robot: Team Kavosh from Iran

Rescue Maze Competition:

The objective of the rescue maze competition is for a robot to navigate a complex maze and identify victims to save.

Team Soccer:

Autonomous bots playing soccer as a team. The soccer tournament came at various levels –

Middle Size League:

Powerful bots playing soccer with a full size soccer ball. Here’s a video of a goal from one of the teams.

Junior League:

A very fast paced soccer game with bots acting in coordination as one team against an opposing team. Here’s a video that shows how exciting this can be.

Humanoid Robot Soccer:

There were three categories – the KidSize Humanoid League, the Humanoid League and the Standard Platform League. There were mainly two challenges: building a humanoid robot that can handle the locomotion challenge of walking while controlling and driving a ball towards the opponent’s goal, and, in the Standard Platform League (e.g. using SoftBank’s NAO Generation 4 robots), playing a full soccer match as a team. The robots looked like toddlers navigating the challenges of walking, coordinating, sensing and controlling the ball, and shooting goals. Overall it was great fun to watch! Do check out the videos below.

 

Industrial Robots and Others at Work and Home settings:

We saw demonstrations from a number of different companies on industrial robotics. A few are described in the following videos.

There were a number of other home-setting challenges as well – for example, unloading grocery bags and storing the items in the right location or shelf in a home.

Rescue Robot League:

Robots navigating difficult terrain: these were primarily demonstrations of how bots can be used for rescue in hazardous situations. Challenges included navigating difficult terrain, opening doors, and accomplishing tasks such as taking sensor readings and mapping a maze or path through the field. Some of the videos listed here are very impressive. These teams were mainly university and research lab teams. Do check out the following links.

 

 

Sponsors:

SoftBank was a big sponsor at the event.

It has made huge investments in robotics. As the following videos make evident, these investments are critical to succeeding in the near future, when we expect a lot of mundane work to be automated and mechanized.

 

Closing thoughts:

We saw that we were competing against a number of national teams. There is a huge difference in the resources and motivation that come with state sponsorship, as was evident with the Iranian, Chinese, Russian, Singaporean, Croatian, Egyptian and Portuguese delegations.
A second lesson was that at this level you need a stable platform and cannot afford to rebuild your bot for every run. Hopefully my son’s team takes this feedback to heart and comes back stronger for next year’s competition in Australia!

The kids had fun – here’s the team at the Notre-Dame Basilica of Montreal

How To Determine Which Machine Learning Technique Is Right For You?

Machine Learning is a vast field with various techniques available to a practitioner. This blog is about how to navigate this space and apply the right methods for your problem.

What is Machine Learning?

Tom Mitchell provides a very apt definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” For a game-playing program, for example:

E = the experience of playing many games.

T = the task of playing an individual game.

P = the probability that the program will win the next game.

For example, a machine playing Go was able to beat the world’s best Go player. Earlier machines depended on humans to provide the example learning set, but in this instance the machine was able to play against itself and learn the basic Go techniques.

Machine Learning techniques are broadly classified as follows:

Supervised Learning: A set of problems where there is a relationship between the input and the output. Given a data set where we already know the correct output, we can train a machine to derive this relationship and use the resulting model to predict outcomes for previously unseen data points. These problems are broadly classified into “regression” and “classification” problems.

  • Regression: When we try to predict results within a continuous output, meaning we try to map input variables to some continuous function. For example, given the picture of a person, predicting the age of the person.
    1. Gradient Descent – or steepest descent, is an optimization technique that repeatedly steps in the direction of the negative gradient to reach a local or global minimum. This technique is often used in machine learning to calculate the coefficients when fitting a regression curve to a training data set. Using these fitted coefficients, the program can then predict a continuous-valued output for any new data presented to it (see the first sketch after this list).
    2. Normal Equation – \(\theta = (X^T X)^{-1} X^T y\). Refers to the set of simultaneous equations in the unknown coefficients that is derived from a large number of observation equations by a least-squares adjustment; solving it gives the regression coefficients in closed form, without iteration.
    3. Neural Networks: Refers to a system of connected nodes that mimics our brains (biological neural networks). Such systems learn the model coefficients by observing real-life data and, once tuned, can be used to predict outputs for unseen data or observations outside the training set.
  • Classification: When we try to predict results in a discrete output, i.e. map input variables into discrete categories. For example, given a patient with a tumor, predicting whether it is benign or malignant. Related concepts and algorithms include (see the second sketch after this list):
    1. Large Margin Classification
    2. Kernels
    3. Support Vector Machines
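
To make the regression techniques above concrete, here is a minimal NumPy sketch (synthetic data and an assumed learning rate, not any particular library’s API) that fits a straight line both by gradient descent and by the normal equation; the two sets of coefficients should agree closely.

```python
import numpy as np

# Synthetic training data: y = 4 + 3x plus noise (made up for illustration).
rng = np.random.default_rng(0)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X[:, 0] + rng.normal(scale=0.5, size=100)

# Prepend a column of ones so theta[0] acts as the intercept.
X_b = np.c_[np.ones((100, 1)), X]
m = len(y)

# Gradient descent: repeatedly step against the gradient of the squared error.
theta = np.zeros(2)
alpha, n_iters = 0.1, 1000          # assumed learning rate and iteration count
for _ in range(n_iters):
    gradient = (2 / m) * X_b.T @ (X_b @ theta - y)
    theta -= alpha * gradient

# Normal equation: closed-form least-squares solution theta = (X^T X)^{-1} X^T y.
theta_closed = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y

print("gradient descent:", theta)        # both should be close to [4, 3]
print("normal equation :", theta_closed)
```

And a minimal classification sketch, assuming scikit-learn is installed, that trains a support vector machine (a large-margin classifier with an RBF kernel) on the library’s bundled breast-cancer data set, loosely mirroring the benign/malignant tumor example above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Benign vs. malignant classification on scikit-learn's bundled data set.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A support vector machine is a large-margin classifier; the RBF kernel
# lets it learn a non-linear decision boundary.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```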

 

Unsupervised Learning: When we derive structure by clustering the data based on relationships among the variables in the data. With unsupervised learning there is no feedback based on the prediction results.

 

  • Clustering: It’s the process of dividing a set of input data into possibly overlapping subsets, where the elements of each subset are considered related by some similarity measure. Take a collection of data and find a way to automatically group the items that are similar or related by different variables. For example, the clustering of news stories on the Google News home page.

Some classic graph clustering algorithms are the following:

  1. Kernel K-means: Select k data points from the input as initial centroids and assign each data point to its nearest centroid; recompute the centroid of each cluster and repeat until the centroids no longer change (the plain, linear-kernel version of this loop is sketched after this list).
  2. K-spanning tree: Obtain the minimum spanning tree (MST) of the input graph; removing k-1 edges from the MST results in k clusters.
  3. Shared nearest neighbor: Obtain the shared nearest neighbor (SNN) graph from the input graph; removing the edges of the SNN graph with weight less than a threshold τ results in groups of non-overlapping vertices.
  4. Betweenness centrality based: Betweenness quantifies the degree to which a vertex (or edge) occurs on the shortest paths between all other pairs of nodes; removing the edges with the highest betweenness splits the graph into clusters.
  5. Highly connected components: Repeatedly remove the minimum set of edges whose removal disconnects the graph until each remaining subgraph is a highly connected subgraph (HCS); these subgraphs are the clusters.
  6. Maximal clique enumeration: A clique is a subgraph C of graph G with edges between all pairs of its nodes; a maximal clique is a clique that is not part of a larger clique. Enumerating the maximal cliques yields densely connected clusters.
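
As an illustration of the centroid loop in item 1, here is a minimal plain k-means sketch in NumPy (i.e. kernel k-means with a linear kernel), run on made-up two-dimensional data:

```python
import numpy as np

def kmeans(points, k, n_iters=100, seed=0):
    """Plain k-means: pick k points as centroids, assign, recompute, repeat."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign every point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new_centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):   # centroids no longer change
            break
        centroids = new_centroids
    return labels, centroids

# Two synthetic blobs (illustrative data only).
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=(0, 0), scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=(5, 5), scale=0.5, size=(50, 2))
labels, centroids = kmeans(np.vstack([blob_a, blob_b]), k=2)
print(centroids)   # should land near (0, 0) and (5, 5)
```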

 

  • Non-Clustering: Allows you to find structure in a chaotic environment.
    1. Reinforcement Learning: where software agents automatically learn the behavior that maximizes a performance (reward) measure.
    2. Recommender Systems: An information filtering system that seeks to predict a user’s preference for an item by watching and learning the user’s behavior (a toy sketch follows this list).
    3. Natural Language Processing: A field that deals with machine interaction with human languages. It specifically manages three challenges: speech recognition, language understanding and response generation.
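
As a toy illustration of the recommender idea, here is a minimal user-based collaborative filtering sketch (a made-up ratings matrix and cosine similarity between users; not any particular production system):

```python
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet" (made-up data).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def predict(ratings, user, item):
    """Predict a missing rating as a similarity-weighted average of other users' ratings."""
    sims = np.array([cosine(ratings[user], ratings[other]) if other != user else 0.0
                     for other in range(len(ratings))])
    rated = ratings[:, item] > 0                 # only users who rated this item
    if not rated.any():
        return 0.0
    return sims[rated] @ ratings[rated, item] / (sims[rated].sum() + 1e-9)

# User 0 has not rated item 2; the prediction leans on the most similar user
# (user 1), who rated that item low, so the predicted rating comes out low.
print(predict(ratings, user=0, item=2))
```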

 

And finally, remember that the 7 essential steps in accomplishing your machine learning project are the following (a minimal end-to-end sketch follows the list):

  • Gathering the data
  • Preparing the data
  • Choosing a Model
  • Training your Model
  • Evaluating your Model parameters
  • Hyperparameter tuning
  • And finally prediction
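
A minimal end-to-end sketch of those steps, assuming scikit-learn and its bundled iris data set (any other data set and model would do just as well):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 1. Gather the data (here: scikit-learn's bundled iris data set).
X, y = load_iris(return_X_y=True)

# 2. Prepare the data: hold out a test set; feature scaling happens inside the pipeline.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 3. Choose a model.
model = make_pipeline(StandardScaler(), SVC())

# 4-6. Train, evaluate and tune hyperparameters with a cross-validated grid search.
grid = GridSearchCV(model, {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1]}, cv=5)
grid.fit(X_train, y_train)
print("best parameters:", grid.best_params_)
print("test accuracy:", grid.score(X_test, y_test))

# 7. And finally, prediction on new data.
print("predictions:", grid.predict(X_test[:3]))
```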