‘Fallout Shelter’ joins Tesla arcade in latest software update

Nearly a year ago, Todd Howard, the director of Bethesda Game Studios, said that the company’s “Fallout Shelter” game would be coming to Tesla displays. It arrived this week via the 2020.20 software update, as first noted by Teslascope, a platform for Tesla drivers.

Fallout Shelter is the latest — and one of the more modern games — to join Tesla’s Arcade, an in-car feature that lets drivers play video games while the vehicle is parked. It joins 2048, Atari’s Super Breakout, Cuphead, Stardew Valley, Missile Command, Asteroids, Lunar Lander and Centipede. The arcade also includes a newly improved (meaning more difficult) backgammon game as well as chess.

The 2020.20 software update that adds the game, along with a few other improvements, hasn’t yet reached all Tesla vehicles, including the Model 3 in this reporter’s driveway (that vehicle is still on the prior 2020.16.2.1 update, which includes improvements to backgammon and a redesigned Tesla Toybox).

However, YouTube channel host JuliansRandomProject was one of the lucky few who did receive it, and released a video that shows Fallout Shelter and how it works in the vehicle. Roadshow also spotted and shared the JuliansRandomProject video, which is embedded below.

Fallout Shelter is just one of the newer features in the software update. The steering wheel’s toggle controls can now play, pause and skip video playback in Theater Mode, the feature that lets owners stream Netflix and other video while the car is in park.

Tesla also improved Trax, its built-in song-recording feature. Trax now includes a piano roll view that lets you edit and fine-tune the notes in a track.

DeepMind’s Agent57 AI agent can best human players across a suite of 57 Atari games

The development of artificial intelligence agents is frequently measured by their performance in games, and for good reason: games offer a wide proficiency curve (the basics are relatively simple to grasp, but mastery is difficult) and almost always include a built-in scoring system for evaluating performance. DeepMind’s agents have tackled the board game Go, as well as the real-time strategy video game StarCraft. The Alphabet company’s most recent feat is Agent57, a learning agent that can beat the average human on each of 57 Atari games spanning a wide range of difficulty, characteristics and gameplay styles.

Being better than humans at 57 Atari games may seem like an odd benchmark for measuring the performance of a deep learning agent, but it’s actually a standard that goes all the way back to 2012, built on a selection of Atari classics including Pitfall, Solaris, Montezuma’s Revenge and many others. Taken together, these games cover a broad range of difficulty levels and demand a range of different strategies to succeed.

That’s a great type of challenge for a deep learning agent, because the goal is not to build something that can determine one effective strategy that maximizes its chances of success in a single game. Instead, researchers build these agents and set them to these tasks to develop something that can learn across multiple and shifting scenarios and conditions. The long-term aim is a learning agent that approaches general AI: an AI that, like a human, can apply its intelligence to any problem put before it, including challenges it has never encountered before.

DeepMind’s Agent57 is remarkable because it performs better than human players on each of the 57 games in the Atari57 set. Previous agents have only managed to beat human players on average, because they were extremely good at some of the simpler games, which essentially reward a basic action-reward loop, but terrible at games requiring more advanced play involving long-term exploration and memory, like Montezuma’s Revenge.

The DeepMind team addressed this by building a distributed agent, with different computers tackling different aspects of the problem. Some were tuned to focus on novelty rewards (encountering things they hadn’t encountered before), with both short- and long-term time horizons for when the novelty value resets. Others sought out simpler exploits, figuring out which repeated pattern provided the biggest reward. All the results were then combined and managed by an agent equipped with a meta-controller, which lets it weigh the costs and benefits of different approaches depending on which game it encounters.
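To make that pattern concrete, here is a minimal toy sketch of a bandit-style meta-controller choosing among policies with different appetites for novelty. Everything in it, from the policy names to the simulated rewards and constants, is an illustrative assumption for this article, not DeepMind’s published Agent57 implementation.

```python
import random

# Three hypothetical policies with different appetites for novelty. A
# novelty_weight of 0 chases only known (extrinsic) rewards; higher values
# lean on an intrinsic bonus for encountering new things.
POLICIES = [
    {"name": "exploit", "novelty_weight": 0.0},
    {"name": "explore-short", "novelty_weight": 0.5},  # short-horizon novelty
    {"name": "explore-long", "novelty_weight": 1.0},   # long-horizon novelty
]

EPSILON = 0.1        # chance the meta-controller tries a non-greedy policy
LEARNING_RATE = 0.1  # how quickly estimated values track recent returns


def run_episode(policy):
    """Stand-in for playing one game episode; returns a noisy score.

    In the real system this would be the game's extrinsic reward plus an
    intrinsic novelty bonus scaled per policy; here random numbers simulate
    that trade-off.
    """
    extrinsic = random.gauss(10.0, 2.0) * (1.0 - 0.3 * policy["novelty_weight"])
    novelty_bonus = random.gauss(3.0, 1.0) * policy["novelty_weight"]
    return extrinsic + novelty_bonus


def meta_controller(episodes=500):
    # Estimated long-run value of each policy, refined as episodes finish.
    values = [0.0] * len(POLICIES)
    for _ in range(episodes):
        # Epsilon-greedy bandit: usually pick the best-looking policy,
        # occasionally sample another to keep the estimates honest.
        if random.random() < EPSILON:
            i = random.randrange(len(POLICIES))
        else:
            i = max(range(len(POLICIES)), key=values.__getitem__)
        ret = run_episode(POLICIES[i])
        values[i] += LEARNING_RATE * (ret - values[i])  # incremental update
    return values


if __name__ == "__main__":
    for policy, value in zip(POLICIES, meta_controller()):
        print(f"{policy['name']:>13}: estimated return {value:.2f}")
```

On a game where exploitation pays off, the estimates drift toward the exploit policy; on a game where the novelty bonus dominates, the controller learns to favor the exploratory ones. That is the cost-benefit weighing described above, in miniature.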

In the end, Agent57 is an accomplishment, but the team says it can still be improved in a few ways. First, it’s incredibly computationally expensive to run, so they will seek to streamline that. Second, it’s actually not as good at some of the simpler games as some simpler agents, even though it excels at the top five games that posed the biggest challenge to previous intelligent agents. The team says it has ideas for making Agent57 better at the simpler games where those less sophisticated agents still outperform it.

The dreaded 10x, or, how to handle exceptional employees

The “10x engineer.” Shudder. Wince. I have rarely seen my Twitter feed unite against an idea so loudly, or in such harmony.

I refer of course to the thread last month by Accel India’s Shekhar Kirani, explaining “If you have a 10x engineer as part of your first few engineers, you increase the odds of your startup success significantly” and then going on to address, in his opinion, “How do you spot a 10x engineer?”

The resulting scorn was tsunami-like. The very concept of a 10x engineer seems so… five years ago. Since then, the Valley has largely come to the collective conclusion that 1) there is no such thing as a 10x engineer, and 2) even if there were, you wouldn’t want to hire one, because they play so poorly with others.

The anti-10x squad raises many important and valid (frankly, obvious and inarguable) points. Go down that Twitter thread and you’ll find 10x engineers characterized as people who eschew meetings, work alone, rarely look at documentation and don’t write much themselves, are poor mentors, and view process, meetings, or training as reasons to abandon their employer. In short, they are unbelievably terrible team members.

Is software a field like the arts, or sports, in which exceptional performers can exist? Sure. Absolutely. Software is Extremistan, not Mediocristan, as Nassim Taleb puts it.