Announcing the final agenda for Robotics + AI — March 3 at UC Berkeley

TechCrunch is returning to UC Berkeley on March 3 to bring together some of the most influential minds in robotics and artificial intelligence. Each year we strive to assemble a cross-section of big companies and exciting new startups, along with top researchers, VCs and thinkers.

In addition to a main stage that includes the likes of Amazon’s Tye Brady, UC Berkeley’s Stuart Russell, Anca Dragan of Waymo, Claire Delaunay of NVIDIA, James Kuffner of Toyota’s TRI-AD, and a surprise interview with Disney Imagineers, we’ll also be offering a more intimate Q&A stage featuring speakers from SoftBank Robotics, Samsung, Sony’s Innovation Fund, Qualcomm, NVIDIA and more.

Alongside a selection of handpicked demos, we’ll also be showcasing the winners from our first-ever pitch-off competition for early-stage robotics companies. You won’t get a better look at exciting new robotics technologies than that. Tickets for the event are still available. We’ll see you in a couple of weeks at Zellerbach Hall.

Agenda

8:30 AM – 4:00 PM

Registration Open Hours

General attendees can pick up their badges starting at 8:30 AM at Lower Sproul Plaza, located in front of Zellerbach Hall. Registration closes at 4:00 PM.

10:00 AM – 10:05 AM

Welcome and Introduction from Matthew Panzarino (TechCrunch) and Randy Katz (UC Berkeley)

10:05 AM – 10:25 AM

Saving Humanity from AI with Stuart Russell (UC Berkeley)

The UC Berkeley professor and AI authority argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.

10:25 AM – 10:45 AM

Engineering for the Red Planet with Lucy Condakchian (Maxar Technologies)

Maxar Technologies has been involved with U.S. space efforts for decades, and is about to send its sixth (!) robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian is general manager of robotics at Maxar and will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.

10:45 AM – 11:05 AM

Automating Amazon with Tye Brady (Amazon Robotics)

Amazon Robotics’ chief technology officer will discuss how the company is using the latest in robotics and AI to optimize its massive logistics operation. He’ll also discuss the future of warehouse automation and how humans and robots can share a workspace.

11:05 AM – 11:15 AM

Live Demo from the Stanford Robotics Club 

11:30 AM – 12:00 PM

Book signing with Stuart Russell (UC Berkeley)

Join one of the foremost experts in artificial intelligence as he signs copies of his acclaimed new book, “Human Compatible.”

11:35 AM – 12:05 PM

Building the Robots that Build with Daniel Blank (Toggle Industries), Tessa Lau (Dusty Robotics), Noah Ready-Campbell (Built Robotics) and Brian Ringley (Boston Dynamics)

Can robots help us build structures faster, smarter and cheaper? Built Robotics makes a self-driving excavator. Toggle is developing a new way to fabricate rebar for reinforced concrete, Dusty builds robot-powered tools, and longtime robotics pioneer Boston Dynamics has recently entered the construction space. We’ll talk with founders and experts from these companies to learn how and when robots will become part of the construction crew.

12:15 PM – 1:00 PM

Q&A: Corporate VC, Partnering and Acquisitions with Kass Dawson (SoftBank Robotics America), Carlos Kokron (Qualcomm Ventures), and Gen Tsuchikawa (Sony Innovation Fund)

Join this interactive Q&A session on the breakout stage with three of the top minds in corporate VC.

1:00 PM – 1:25 PM

Pitch-off 

Select early-stage companies, hand-picked by TechCrunch editors, will take the stage and have five minutes to present their wares.

1:15 PM – 2:00 PM

Q&A: Founding Robotics Companies with Sebastien Boyer (FarmWise) and Noah Ready-Campbell (Built Robotics)

Your chance to ask questions of some of the most successful robotics founders on our stage.

Investing in Robotics and AI: Lessons from the Industry’s VCs with Dror Berman (Innovation Endeavors), Kelly Chen (DCVC) and Eric Migicovsky (Y Combinator)

Leading investors will discuss the rising tide of venture capital funding in robotics and AI. The investors bring a combination of early-stage investing and corporate venture capital expertise, sharing a fondness for the wild world of robotics and AI investing.

1:50 PM – 2:15 PM

Facilitating Human-Robot Interaction with Mike Dooley (Labrador Systems) and Clara Vu (Veo Robotics)

As robots become an ever more meaningful part of our lives, interactions with humans are increasingly inevitable. These experts will discuss the broad implications of HRI in the workplace and home.

2:15 PM – 2:40 PM

Toward a Driverless Future with Anca Dragan (UC Berkeley/Waymo), Jinnah Hosein (Aurora) and Jur van den Berg (Ike)

Autonomous driving is set to be one of the biggest categories for robotics and AI. But there are plenty of roadblocks standing in its way. Experts will discuss how we get there from here. 

2:15 PM – 3:00 PM

Q&A: Investing in Robotics Startups with Rob Coneybeer (Shasta Ventures), Jocelyn Goldfein (Zetta Venture Partners) and Aaron Jacobson (New Enterprise Associates)

Join this interactive Q&A session on the breakout stage with some of the greatest investors in robotics and AI.

2:40 PM – 3:10 PM

Disney Robotics

Imagineers from Disney will present state-of-the-art robotics built to populate its theme parks.

3:10 PM – 3:35 PM

Bringing Robots to Life with Max Bajracharya and James Kuffner (Toyota Research Institute Advanced Development)

This summer’s Tokyo Olympics will be a huge proving ground for Toyota’s TRI-AD. Executive James Kuffner and Max Bajracharya will join us to discuss the department’s plans for assistive robots and self-driving cars.

3:15 PM – 4:00 PM

Q&A: Building Robotics Platforms with Claire Delaunay (NVIDIA) and Steve Macenski (Samsung Research America)

Join this interactive Q&A session on the breakout stage with some of the greatest engineers in robotics and AI.

3:35 PM – 4:00 PM

The Next Century of Robo-Exoticism with Abigail De Kosnik (UC Berkeley), David Ewing Duncan, Ken Goldberg (UC Berkeley), and Mark Pauline (Survival Research Labs)

In 1920, Karel Čapek coined the term “robot” in a play about mechanical workers organizing a rebellion to defeat their human overlords. One hundred years later, in the context of increasing inequality and xenophobia, the panelists will discuss cultural views of robots through the lens of “Robo-Exoticism,” which exaggerates both negative and positive attributes and reinforces old fears, fantasies and stereotypes.

4:00 PM – 4:10 PM 

Live Demo from Somatic

4:10 PM – 4:35 PM

Opening the Black Box with Explainable AI with Trevor Darrell (UC Berkeley), Krishna Gade (Fiddler Labs) and Karen Myers (SRI International)

Machine learning and AI models can be found in nearly every aspect of society today, but their inner workings are often as much a mystery to their creators as to those who use them. UC Berkeley’s Trevor Darrell, Krishna Gade of Fiddler Labs and Karen Myers from SRI will discuss what we’re doing about it and what still needs to be done.

4:35 PM – 5:00 PM 

Cultivating Intelligence in Agricultural Robots with Lewis Anderson (Traptic), Sebastien Boyer (FarmWise) and Michael Norcia (Pyka)

The benefits of robotics in agriculture are undeniable, yet the field is only getting started. Lewis Anderson (Traptic) and Sebastien Boyer (FarmWise) will compare notes on the rigors of developing industrial-grade robots that pick crops and weed fields, respectively, and Pyka’s Michael Norcia will discuss taking flight over those fields with an autonomous crop-spraying drone.

5:00 PM – 5:25 PM

Fostering the Next Generation of Robotics Startups with Claire Delaunay (NVIDIA), Scott Phoenix (Vicarious) and Joshua Wilson (Freedom Robotics)

Robotics and AI are the future of many industries, but the barrier to entry remains difficult for many startups to surmount. Speakers will discuss the challenges of serving robotics startups and companies that require robotics labor, from bootstrapped startups to large-scale enterprises.

5:30 PM – 7:30 PM

Unofficial After Party (Cash Bar Only)

Come hang out at the unofficial After Party at Tap Haus, 2518 Durant Ave, Ste C, Berkeley.

Final Tickets Available

We only have so much space in Zellerbach Hall and tickets are selling out fast. Grab your General Admission Ticket right now for $350 and save 50 bucks as prices go up at the door.

Student tickets are just $50 and can be purchased here. Student tickets are limited.

Startup Exhibitor Packages are sold out!


Google launches the first developer preview of Android 11

With the days of dessert-themed releases officially behind it, Google today announced the first developer preview of Android 11, which is now available as system images for Google’s own Pixel devices, starting with the Pixel 2.

As of now, there is no way to install the updates over the air. That’s usually something the company makes available at a later stage. These first releases aren’t meant for regular users anyway. Instead, they are a way for developers to test their applications and get a head start on making use of the latest features in the operating system.

“With Android 11 we’re keeping our focus on helping users take advantage of the latest innovations, while continuing to keep privacy and security a top priority,” writes Google VP of Engineering Dave Burke. “We’ve added multiple new features to help users manage access to sensitive data and files, and we’ve hardened critical areas of the platform to keep the OS resilient and secure. For developers, Android 11 has a ton of new capabilities for your apps, like enhancements for foldables and 5G, call-screening APIs, new media and camera capabilities, machine learning, and more.”

Unlike some of Google’s previous early previews, this first version of Android 11 actually brings quite a few new features to the table. As Burke noted, there are some obligatory 5G additions, like a new bandwidth estimate API, as well as a new API that checks whether a connection is unmetered so apps can, for example, play higher-resolution video.
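For a rough idea of what using those signals could look like, here is a minimal Kotlin sketch of our own (not code from Google’s announcement) in which an app watches the default network and only bumps up video quality when the connection is, at least temporarily, unmetered; chooseVideoQuality is a hypothetical app-side helper.

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.Network
import android.net.NetworkCapabilities

// Sketch: listen for capability changes on the default network and pick a
// video quality from the meteredness and bandwidth-estimate signals.
fun watchNetworkForStreaming(context: Context) {
    val cm = context.getSystemService(ConnectivityManager::class.java)
    cm.registerDefaultNetworkCallback(object : ConnectivityManager.NetworkCallback() {
        override fun onCapabilitiesChanged(network: Network, caps: NetworkCapabilities) {
            val unmetered =
                caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED) ||
                // New capability in Android 11 (API 30), e.g. an unlimited-data window on 5G.
                caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_TEMPORARILY_NOT_METERED)
            val estimatedKbps = caps.linkDownstreamBandwidthKbps // bandwidth estimate
            chooseVideoQuality(unmetered, estimatedKbps)
        }
    })
}

// Hypothetical hook: e.g. request 4K only when unmetered and bandwidth looks sufficient.
fun chooseVideoQuality(unmetered: Boolean, estimatedKbps: Int) {
    // app-specific player configuration goes here
}
```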

With Android 11, Google is also expanding its Project Mainline lineup of updatable modules from 10 to 22. With this, Google is able to update critical parts of the operating system without having to rely on the device manufacturers to release a full OS update. Users simply install these updates through the Google Play infrastructure.

Users will be happy to see that Android 11 will feature native support for waterfall screens that cover a device’s edges, using a new API that helps developers manage interactions near those edges.
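A minimal sketch, again our own illustration rather than sample code from Google, of how an app might keep touch targets away from those curved edges by reading the new waterfall insets and padding its layout:

```kotlin
import android.view.View
import android.view.WindowInsets

// Sketch: pad the root view so interactive content stays clear of the
// device's waterfall (curved-edge) regions, reported by Android 11.
fun avoidWaterfallEdges(root: View) {
    root.setOnApplyWindowInsetsListener { view, insets: WindowInsets ->
        val waterfall = insets.displayCutout?.waterfallInsets
        if (waterfall != null) {
            view.setPadding(waterfall.left, waterfall.top, waterfall.right, waterfall.bottom)
        }
        insets
    }
}
```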

Also new are some features that developers can use to handle conversational experiences, including a dedicated conversation section in the notification shade, as well as a new chat bubbles API and the ability to insert images into replies you want to send from the notifications pane.
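As a hedged illustration of how those pieces could fit together (not code from the announcement), a messaging app might tie a notification to a conversation shortcut and attach bubble metadata so it can pop out as a chat bubble; the channel, shortcut, intents and avatar below are assumed to be set up elsewhere in the app.

```kotlin
import android.app.Notification
import android.app.PendingIntent
import android.app.Person
import android.content.Context
import android.graphics.drawable.Icon

// Sketch: a conversation-style notification that can also surface as a bubble.
fun buildChatNotification(
    context: Context,
    channelId: String,        // assumed existing notification channel
    shortcutId: String,       // assumed published conversation shortcut
    contentIntent: PendingIntent,
    bubbleIntent: PendingIntent,
    avatar: Icon,
): Notification {
    val sender = Person.Builder().setName("Alex").setIcon(avatar).build()

    val bubble = Notification.BubbleMetadata.Builder(bubbleIntent, avatar)
        .setDesiredHeight(600)
        .build()

    return Notification.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.sym_action_chat)
        .setShortcutId(shortcutId) // marks this as a conversation notification
        .setStyle(
            Notification.MessagingStyle(sender)
                .addMessage("See you at 6?", System.currentTimeMillis(), sender)
        )
        .setBubbleMetadata(bubble) // lets the conversation appear as a bubble
        .setContentIntent(contentIntent)
        .build()
}
```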

Unsurprisingly, Google is adding a number of new privacy and security features to Android 11, too. These include one-time permissions for sensitive types of data, as well as updates to how the OS handles data on external storage, which it first previewed last year.

As for security, Google is expanding its support for biometrics and adding different levels of granularity (strong, weak and device credential), in addition to the usual hardening of the platform you would expect from a new release.
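To illustrate that granularity with a small sketch of our own (not from Google’s post), an app could ask up front whether a “strong” biometric, or the device credential as a fallback, is available before offering biometric sign-in:

```kotlin
import android.content.Context
import android.hardware.biometrics.BiometricManager
import android.hardware.biometrics.BiometricManager.Authenticators

// Sketch: check for a strong biometric, allowing PIN/pattern/password fallback.
fun canUseStrongAuth(context: Context): Boolean {
    val bm = context.getSystemService(BiometricManager::class.java)
    val result = bm.canAuthenticate(
        Authenticators.BIOMETRIC_STRONG or Authenticators.DEVICE_CREDENTIAL
    )
    return result == BiometricManager.BIOMETRIC_SUCCESS
}
```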

There are plenty of other smaller updates as well, including some specifically meant to make running machine learning applications easier. Google also highlights that Android 11 brings a couple of new features to help IT manage corporate devices with enhanced work profiles.

This first developer preview of Android 11 is launching about a month earlier than previous releases, so Google is giving itself a bit more time to get the OS ready for a wider launch. Currently, the release schedule calls for monthly developer preview releases until April, followed by three betas and a final release in Q3 2020.

Adobe celebrates Photoshop’s 30th anniversary with new desktop and mobile features

Adobe’s Photoshop celebrates its 30th birthday today. Over that time, the app has pretty much become synonymous with photo editing and there will surely be plenty of retrospectives. But to look ahead, Adobe also today announced a number of updates to both the desktop and mobile Photoshop experiences.

The marquee feature here is probably the addition of the Object Selection tool in Photoshop on the iPad. It’s no secret that the original iPad app wasn’t exactly a hit with users, as it lacked a number of features Photoshop users wanted to see on mobile. Since then, the company made a few changes to the app and explained some of its decisions in greater detail. Today, Adobe notes, 50 percent of reviews give the app five stars, and the app has been downloaded more than 1 million times since November.

With the Object Selection tool, which it first announced for the desktop version three months ago, Adobe is now bringing a new selection tool to Photoshop that is specifically meant to allow creatives to select and manipulate one or multiple objects in complex scenes. Using the company’s Sensei AI technology and machine learning, it gives users a lot of control over the selection process, even if you only draw a crude outline around the area you are trying to select.

Also new on the iPad are additional controls for typesetting. For now, this means tracking, leading and scaling, as well as formatting options like all caps, small caps, superscript and subscript.

On the desktop, Adobe is bringing improvements to the content-aware fill workspace, as well as a much-improved lens blur feature that mimics the bokeh effect of taking an image with a shallow depth of field. Previously, the lens blur feature ran on the CPU and looked somewhat unrealistic, with sharp edges around out-of-focus foreground objects. Now the algorithm runs on the GPU, making the blur far softer and giving foreground objects a far more realistic look.

As for the improved content-aware fill workspace, Adobe notes that you can now make multiple selections and apply multiple fills at the same time. This isn’t exactly a revolutionary new feature, but it’s a nice workflow improvement for those who often use this tool.