Stanford’s Machine Learning Course @ Coursera

Stanford’s Machine Learning course, taught by Dr. Andrew Ng, was one of the courses that started the MOOC enthusiasm, and having now completed it, I can see why. I found it fascinating, mostly just the right level of challenge, and a class I’ve gotten a lot out of.

Machine Learning is basically having the computer figure out part of how to solve the problem, rather than explicitly programming in all the parameters. So, for example, you can feed attributes of a set of items into the computer and tell it to group the items into a specified number of clusters, and using, for example, k-means clustering, it will find the grouping that makes the clusters most distinct. Or you can use a neural network to identify hand-written digits by feeding the network training examples along with the correct answers, without ever trying to explicitly program how to distinguish a 1 from a 7 from a 4.
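To make the clustering example concrete, here’s a rough sketch of k-means in Python with scikit-learn (the package I mention switching to at the end of this post; the course itself uses Octave, and the data here is made up):

```python
# Rough k-means sketch using scikit-learn rather than the course's Octave;
# the data is synthetic, just to show the shape of the approach.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three blobs of 2-D points scattered around different centers
points = np.vstack([
    rng.normal(loc=center, scale=0.5, size=(50, 2))
    for center in [(0, 0), (5, 5), (0, 5)]
])

# Ask for three clusters; k-means repeatedly assigns each point to the
# nearest centroid, then moves each centroid to the mean of its points.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)  # the learned cluster centers
print(kmeans.labels_[:10])      # cluster assignments for the first 10 points
```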

Example classification results using logistic regression in Octave

The course is focused on machine learning algorithms, and seems to cover most of the basics, with the exception of ensemble methods. It is not a class in big data, but it is these same analytic approaches, adapted to handle large data sets, that are used in Big Data applications. Part of one of the videos in the final week provides a bit of an introduction to the issues that have to be addressed when applying these techniques to big data.

The class, like other MOOCs I’ve taken, is light on theory to cut down on the difficulty and time required. However, this class provided some theory, without proofs, so that you gained an understanding of the topic and it wasn’t just a cookbook. At the same time, it covered practical recommendations and guidance for putting what you learned into use. I really liked the balance.

There are video lectures, typically with one or two questions in each video that aren’t scored; they just break up the video and provide a self-check that you are following the material. Then there are weekly homework problems, typically five, with some having multiple parts. They are multiple choice, with most having multiple answers to provide (e.g., “check which of the following statements are true”). In addition, there are hands-on programming assignments each week using Octave, a free programming language. Octave’s syntax is almost identical to Matlab’s, and you can use Matlab as an alternative. Octave uses a command-line interface, however, rather than Matlab’s notebook approach.

With one exception, the assignments weren’t too hard, but they took work. The neural networks assignment, about halfway through the course, really took a lot of work to complete. I was worried that the assignments might get progressively harder, but they didn’t.

The topics covered in the course are:

  1. Introduction to Machine Learning. Univariate linear regression. (Optional: Linear algebra review.)
  2. Multivariate linear regression. Practical aspects of implementation. Octave tutorial.
  3. Logistic regression, one-vs-all classification, regularization (see the sketch after this list).
  4. Neural Networks.
  5. Practical advice for applying learning algorithms: how to develop and debug, feature and model design, and setting up the structure of experiments.
  6. Support Vector Machines (SVMs) and the intuition behind them.
  7. Unsupervised learning: clustering and dimensionality reduction.
  8. Anomaly detection.
  9. Recommender systems.
  10. Large-scale machine learning. An example application.
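To give a flavor of the programming assignments for topic 3, here’s a rough one-vs-all classification sketch. I’ve used Python and scikit-learn with scikit-learn’s small built-in digits data set, rather than the Octave code and data the course actually provides:

```python
# Rough sketch of one-vs-all (one-vs-rest) regularized logistic regression,
# using scikit-learn instead of the course's Octave assignment code.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

X, y = load_digits(return_X_y=True)  # 8x8 digit images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One-vs-all: train one regularized binary logistic classifier per digit;
# C is the inverse of the regularization strength (the lambda in the course).
clf = OneVsRestClassifier(LogisticRegression(C=1.0, max_iter=1000))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```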

If you’ve any interest in the topic and are looking to learn, I highly recommend this course. It’s inspired me to continue learning through the machine learning challenges at Kaggle. I’ve switched over from Octave to the Scikit-learn package in Python. All I can say about Scikit-learn in this article is WOW! What a powerful, convenient, and, especially given that it’s an open source project, amazingly well-documented package it is. I’ll have more about Kaggle and Scikit-learn in a later post.

My First Maker Faire

My wife and son and I went up to New York City last weekend to attend the World Maker Faire. It was our first. It was very impressive to see all the makers. A journalist described Maker Faire as a family-friendly combination of the Consumer Electronics Show and Burning Man. I’ve never been to either, but from what I’ve read, that sounds like it captures some of the flavor. The CES-like portion, however, is more equipment for Makers than finished electronics products. The exhibits and booths run from high school robotics teams to corporate exhibits (e.g., Arduino), covering a very broad range of subjects, from textiles to Purina’s DIYCat tent.

Eepy Bird getting ready for their next Coke and Mentos fountain show

Click the photo to view a small photo set from the Faire. Adafruit has a much larger gallery of pictures posted here.

In addition to the exhibits, there are learning areas, including Learn to Solder, lockpicking classes, and build-your-own air-powered rockets for kids, as well as short talks of 20 minutes or so at multiple stages throughout both days, and some large exhibits, such as the human-sized mousetrap show and Eepy Bird’s Coke and Mentos fountain extravaganza.

The presentations I went to included Multirotors 101, Shrinking the Size of Your Arduino Projects, Hacking the Unhackable: How We Can Make the Entire World Interactive, and Getting Started with the Arduino YUN. My son attended talks including ones on the Raspberry Pi and on how the Maker Faire is, in its own way, continuing the traditions of the World’s Fairs of old. The Shrinking the Size of Your Arduino Projects talk was about a product I hadn’t seen before, the TinyDuino. This is an Arduino-compatible board the size of a quarter, WITH its own set of shields! It’s the latter that really distinguishes it from other tiny Arduino-compatible boards.

It was definitely a fun weekend, with stuff for the whole family. I’m very glad we drove up to New York and spent the weekend. Having now attended the World Maker Faire, I don’t feel the need to make it a must-do annual trip, but I definitely want to return. We went for a day and a half, but in the future will probably make it just a one-day event.

Field Report: Riding in a Self-Driving Car

Last month, as part of my work, I got a chance to attend TRB’s 2nd Annual Workshop on Vehicle Automation, held at Stanford University. It had a lot of interesting presentations and discussions, and almost all of the material is available at the workshop’s website. As part of the workshop, they had several demonstration vehicles, including one of Google’s cars, which my colleague rode in, and a very similar vehicle from Bosch, which I got a chance to ride in.

The Bosch self-driving car, after the demo ride, safe and sound

The Bosch vehicle is very similar to everything I’ve seen and heard about the better-known Google vehicles. It has a number of forward-, rear-, and side-looking radars, as well as a LIDAR on the roof. The LIDAR and the very accurate GPS are very expensive sensors, and their prices are not expected to drop to what’s needed for production vehicles. Bosch’s research plan is to transition to a more cost-effective sensor suite over the next several years. It was fascinating to watch the real-time display of what the LIDAR and radars were seeing as we drove. One thing I found interesting is that the vehicle was often able to “see” several cars ahead. Here’s a close-up of the LIDAR system:

The LIDAR sensor on the roof of the Bosch automated vehicle

For the demo, the human driver drove the vehicle out onto the freeway and then engaged the automation features. The vehicle then steered itself, staying within the lane, and kept its speed. When a slower vehicle pulled in front, the vehicle automatically checked the lane to the left and then switched to the left lane in order to maintain the desired set speed. VERY impressive!

A couple of notes: at one point the vehicle oscillated very slightly within the lane, all the while staying well within the lane, sort of like what a new driver might sometimes do. I thought it might be the tuning of the control algorithm and asked about it, but the researcher believed it was actually a slight wobble in the prescribed path on the electronic map, although he was going to have to look at the details after the conference to confirm this. Also, when a car pulled in front of us with a rather short separation distance, the vehicle braked harder than it probably needed to, which IS just a matter of getting the tuning right. Other than the hard braking, the ride felt very comfortable and normal.

This was actually my third demo ride in an automated vehicle. The first was in Demo ’97, as part of the Automated Highway System program. That was very impressive for its time, but the demo took place on a closed-off roadway, rather than in full normal traffic on an open public freeway, like the Bosch demo. In addition, the vehicle control systems and sensors were far less robust, relying on permanent magnets in the roadway for navigation. Even then, there was work going on with vision systems, but the computing power wasn’t quite there yet. In 2000, I rode in a university research vehicle that used vision systems around a test track at the Intelligent Transport Systems World Congress in Turin, Italy. That system, which used vision rather than magnets, was again a great step forward, but still far from robust. Today’s systems, if the cost can be brought down, seem well on the path to commercial sale.

While Google executives have talked about vehicles with limited self-driving being sold before 2020, most other companies were talking about the mid-2020s. This isn’t for a vehicle that can totally drive itself anywhere, which is the long-term dream, but rather for a vehicle that can often drive itself and can totally take over for long stretches of roadway. The National Highway Traffic Safety Administration (NHTSA) has a very useful taxonomy that defines vehicle automation as having five levels:

  • No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls – brake, steering, throttle, and motive power – at all times.
  • Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.
  • Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.
  • Limited Self-Driving Automation (Level 3): Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.
  • Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.

Level 2 systems have already been announced as coming into production by several automakers within the next five years. Level 3 by the mid-2020s is the stated goal of several companies. Full automation (the truly autonomous vehicle with no driver required) is still the stuff of science fiction, but it is where a lot of really interesting effects on society would develop.

Here’s a short 3-minute Bosch video on their vehicle and their research: