Sound and Vision: Alexa Ambient Experience AI could be the next leap for TVs
OPINION: Amazon has launched its own-brand TVs in the UK for the first time, and as you’d expect from a Fire TV product, they have all the smart bells and whistles you can think of – especially if you’re an Amazon subscriber.
Earlier this week I saw demos of the new TVs in person, and the biggest takeaway from the event was the use of artificial intelligence – or the Fire TV Ambient Experience, as Amazon is calling it – to put smarts at the very core of the TV experience.
I think by now everyone is aware of ChatGPT’s rise and the effect it’s having on multiple industries. That’s not to say that the Ambient Experience will have a similar destabilizing effect on the TV market – artificial intelligence has been used by many TV brands and interfaces for years – but going forward this might have the most profound impact on how people interact with their TVs.
The first part of the Ambient Experience is Alexa Widgets, which appear when the TV is in its screensaver mode and keep you informed of your tasks, the weather, music recommendations, a ‘What to eat’ widget and more. There are also Alexa Routines to control and operate connected devices in the home. So far, that’s all pretty standard for Fire TV devices.
The most interesting part of the demo was the Dynamic Art feature. Displaying art on the TV screen is not a new idea – LG does it with its G-series OLEDs, and Samsung has been doing it for years, most notably with its Frame QLED TVs.
It’s the ‘dynamic’ part of the feature that I’ve not seen before. Amazon has partnered with AI artist Sam Stubblefield (even the name sounds like it’s artificially generated), who has created an algorithm that adapts a piece of art to your environment in real time. The parameters for shifting include temperature, proximity, time of day, weather and others, creating pieces of art that are claimed to be specific to you.
You could, for instance, have two people in different parts of the world start off with the same image, but because of these environmental factors, two different images will be generated over time. Whether it’s cool or strange is something left for the user to decide, but I don’t think you can say it’s not a novel or interesting way of looking at and perceiving art.
Another demonstration using Alexa’s artificial intelligence was even better. Daniel Rausch, Vice President, Entertainment Devices and Services at Amazon, set up a demo where he asked Alexa to create an image of the Northern Lights over Big Ben in London, and within a matter of seconds, she had created that exact image. Even more spectacular was when he asked Alexa to convert the image to look as if it’d been created by Vincent Van Gogh – and, having visited the Van Gogh exhibition in Amsterdam earlier this year, I’d say even that master painter might have been impressed with what Alexa came up with.
Another exhibition of Alexa’s prowess was finding and selecting photos of a specific person within a photo library. At least one of the photos was tagged with the person’s name, but I’d say there’s something else going on behind the scenes for Alexa to find all the relevant photos – from a specific point in time, too. That speaks both to the convenience of artificial intelligence and to the suspicions people have about it.
Nonetheless, it felt exciting, and perhaps that’s a cautionary tale about AI: not to get too wrapped up in, and seduced by, the relative ease with which it does things.
But there’s no doubt it’s impressive, and it is absolutely here to stay. The main question is where we go from here – and that’s something we, the users, will hopefully get a chance to influence.