
Friday, June 21, 2019

Humanising Autonomy pulls in $5M to help self-driving cars keep an eye on pedestrians

Pretty much everything about making a self-driving car is difficult, but among the most difficult parts is making sure the vehicles know what pedestrians are doing — and what they’re about to do. Humanising Autonomy specializes in this, and hopes to become a ubiquitous part of people-focused computer vision systems worldwide.

The company has raised a $5.3 million seed round from an international group of investors on the strength of its AI system, which it claims outperforms humans and works on images from practically any camera you might find in a car these days.

HA’s tech is a set of machine learning modules trained to identify different pedestrian behaviors — is this person about to step into the street? Are they paying attention? Have they made eye contact with the driver? Are they on the phone? Things like that.

The company credits the robustness of its models to two main things. First, the variety of its data sources.

“Since day one we collected data from any type of source — CCTV cameras, dash cams of all resolutions, but also autonomous vehicle sensors,” said co-founder and CEO Maya Pindeus. “We’ve also built data partnerships and collaborated with different institutions, so we’ve been able to build a robust data set across different cities with different camera types, different resolutions and so on. That’s really benefited the system, so it works in nighttime, rainy Michigan situations, etc.”

Notably, their models rely only on RGB data, forgoing any depth information that might come from lidar, another common sensor type. But Pindeus said that type of data isn't by any means incompatible; it just isn't as plentiful or relevant as real-world, visible-light footage.

In particular, HA was careful to acquire and analyze footage of accidents, because these are especially informative cases in which an AV or human driver failed to read a pedestrian's intentions, or vice versa.

The second advantage Pindeus claimed is the modular nature of the models the company has created. There isn’t one single “what is that pedestrian doing” model, but a set of them that can be individually selected and tuned according to the autonomous agent’s or hardware’s needs.

"For instance, if you want to know whether someone is distracted as they're crossing the street, there are a lot of things that we do as humans to tell if someone is distracted," she said. "We have all these different modules that kind of come together to predict whether someone's distracted, at risk, etc. This allows us to tune it to different environments, for instance London and Tokyo — people behave differently in different environments."

“The other thing is processing requirements. Autonomous vehicles have a very strong GPU requirement,” she continued. “But because we build in these modules, we can adapt it to different processing requirements. Our software will run on a standard GPU when we integrate with level 4 or 5 vehicles, but then we work with aftermarket, retrofitting applications that don’t have as much power available, but the models still work with that. So we can also work across levels of automation.”
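The article doesn't describe how these modules are actually packaged, but the general pattern it points at is familiar: a shared set of pedestrian tracks feeding whichever behavior classifiers a given deployment can afford to run. Below is a minimal Python sketch of that idea; the class and module names are invented for illustration and are not taken from Humanising Autonomy's software.

```python
# Hypothetical sketch of a modular pedestrian-behavior pipeline.
# None of these classes or names come from Humanising Autonomy's stack;
# they only illustrate the idea of independently selectable modules.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class PedestrianTrack:
    """A tracked pedestrian: a sequence of RGB crops plus image coordinates."""
    frames: list          # RGB crops over time (e.g. numpy arrays)
    trajectory: list      # (x, y) positions over time


# Each "module" is a callable that scores one behavior from RGB input only.
BehaviorModule = Callable[[PedestrianTrack], float]


def distraction_module(track: PedestrianTrack) -> float:
    """Placeholder: probability the person is distracted (e.g. on a phone)."""
    return 0.0  # a real module would run a trained classifier here


def crossing_intent_module(track: PedestrianTrack) -> float:
    """Placeholder: probability the person is about to step into the road."""
    return 0.0


class BehaviorPipeline:
    """Runs only the modules selected for a given deployment / compute budget."""

    def __init__(self, modules: Dict[str, BehaviorModule]):
        self.modules = modules

    def predict(self, tracks: List[PedestrianTrack]) -> List[Dict[str, float]]:
        return [{name: m(t) for name, m in self.modules.items()} for t in tracks]


# A full AV stack might enable every module; an aftermarket retrofit product
# with less compute might enable only the cheapest, highest-value ones.
full_stack = BehaviorPipeline({
    "distracted": distraction_module,
    "crossing_intent": crossing_intent_module,
})
retrofit = BehaviorPipeline({"crossing_intent": crossing_intent_module})
```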

The idea is that it makes little sense to aim only for the top levels of autonomy when really there are almost no such cars on the road, and mass deployment may not happen for years. In the meantime, however, there are plenty of opportunities in the sensing stack for a system that can simply tell the driver that there’s a danger behind the car, or activate automatic emergency braking a second earlier than existing systems.

While there are lots of papers published about detecting pedestrian behavior or predicting what a person in an image is going to do, there are few companies working specifically on that task. A full-stack sensing company focusing on lidar and RGB cameras needs to complete dozens or hundreds of tasks, depending on how you define them: object characterizations and tracking, watching for signs, monitoring nearby and distant cars and so on. It may be simpler for them and for manufacturers to license HA’s functioning and highly specific solution rather than build their own or rely on more generalized object tracking.

“There are also opportunities adjacent to autonomous vehicles,” pointed out Pindeus. Warehouses and manufacturing facilities use robots and other autonomous machines that would work better if they knew what workers around them were doing. Here the modular nature of the HA system works in its favor again — retraining only the parts that need to be retrained is a smaller task than building a new system from scratch.
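The piece doesn't say how that selective retraining works in practice, but a common pattern is to keep a shared feature backbone frozen and train only a new task-specific head. The PyTorch snippet below is a rough, hypothetical illustration of that approach (the "forklift proximity" module is invented for the warehouse example), not HA's actual training code.

```python
# Hypothetical sketch: retrain a single behavior head while keeping the
# shared backbone frozen. Assumes a PyTorch setup; not HA's actual code.
import torch
import torch.nn as nn


class SharedBackbone(nn.Module):
    """Stand-in feature extractor shared by all behavior modules."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                 nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1),
                                 nn.Flatten())

    def forward(self, x):
        return self.net(x)


backbone = SharedBackbone()
forklift_proximity_head = nn.Linear(16, 1)   # hypothetical new warehouse module

# Freeze the shared backbone; only the new head's weights get updated.
for p in backbone.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(forklift_proximity_head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One dummy training step on random data, just to show the mechanics.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()
logits = forklift_proximity_head(backbone(images))
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```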

Currently the company is working with mobility providers in Europe, the U.S. and Japan, including Daimler Mercedes Benz and Airbus. It’s got a few case studies in the works to show how its system can help in a variety of situations, from warning vehicles and pedestrians about each other at popular pedestrian crossings to improving path planning by autonomous vehicles on the road. The system can also look over reams of past footage and produce risk assessments of an area or time of day given the number and behaviors of pedestrians there.
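The article doesn't describe how those risk assessments are computed. As a purely illustrative sketch, historical per-frame behavior scores could be aggregated into a per-hour risk figure for a location along these lines; the scoring rule and field names here are invented, not Humanising Autonomy's method.

```python
# Hypothetical sketch: turn historical per-frame detections into a simple
# per-hour risk score for one location. The scoring rule is invented.
from collections import defaultdict
from datetime import datetime
from typing import Dict, List, NamedTuple


class Detection(NamedTuple):
    timestamp: datetime
    crossing_intent: float   # 0..1 score from an intent module
    distracted: float        # 0..1 score from a distraction module


def hourly_risk(detections: List[Detection]) -> Dict[int, float]:
    """Average a crude risk proxy (intent, weighted up by distraction) per hour of day."""
    sums: Dict[int, float] = defaultdict(float)
    counts: Dict[int, int] = defaultdict(int)
    for d in detections:
        hour = d.timestamp.hour
        sums[hour] += d.crossing_intent * (1.0 + d.distracted)
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}


# Example: two detections near a crossing, one at 08:15 and one at 17:40.
history = [
    Detection(datetime(2019, 6, 18, 8, 15), crossing_intent=0.7, distracted=0.2),
    Detection(datetime(2019, 6, 18, 17, 40), crossing_intent=0.9, distracted=0.8),
]
print(hourly_risk(history))   # {8: 0.84, 17: 1.62}
```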

The $5.3 million seed round, led by Anthemis, with Japan’s Global Brain, Germany’s Amplifier and SV’s Synapse Partners, will mostly be dedicated to commercializing the product, Pindeus said.

“The tech is ready, now it’s about getting it into as many stacks as possible, and strengthening those tier 1 relationships,” she said.

Obviously it’s a rich field to enter, but still quite a new one. The tech may be ready to deploy, but the industry won’t stand still, so you can be sure that Humanising Autonomy will move with it.

Source: TechCrunch, Devin Coldewey, June 19, 2019

***

This post was brought to you by Woewoda Communications, your partner in the venture capital, private equity and startup markets; offering strategic communications, public relations & investor relations services to Canadian VCs, PEs, Angels, Endowments/Trusts, Family Offices, and Canadian startups involved in ICT, IoT, blockchain, life sciences, healthcare, agribusiness, clean energy, fintech, AI and robotics.

Are you a Canadian GP/LP/CI or a Canadian startup that needs to grow or scale? Give us a call! One of our representatives would love to explain how we vertically design and then systematically layer each of our communication platforms to effectively reach niche target audiences for our clients. WC offers a unique synergistic approach to effectively communicating our clients' message to their target audience.

Serving Vancouver, Montreal, Toronto, Waterloo, Ottawa and Halifax.

