Sony has announced the world’s first image sensor with integrated AI smarts. The new IMX500 sensor incorporates both processing power and memory, allowing it to perform machine learning-powered computer vision tasks without extra hardware. The result, says Sony, will be faster, cheaper, and more secure AI cameras.
The IMX500 is destined for commercial clients, not consumer hardware
Over the past few years, devices from smartphones to surveillance cameras have benefited from the integration of AI. Machine learning is often used not only to improve the quality of the photos we take but also to understand video like a human would: identifying people and objects in the frame. The applications of this technology are huge (and sometimes worrying), enabling everything from self-driving cars to automated surveillance.
But many applications rely on sending images and videos to the cloud to be analyzed. This can be a slow and insecure journey, exposing data to hackers. In other scenarios, manufacturers have to install specialized processing cores on devices to handle the extra computational demand, as with new high-end phones from Apple, Google, and Huawei.
But Sony says its new image sensor offers a more streamlined solution than either of those approaches.
“There are other ways to implement these solutions,” Sony vice president of business and innovation Mark Hanson told The Verge, referencing edge computing, which uses dedicated AI chips not attached to the image sensor. “But I don’t believe they’re going to be anywhere close to as cost-effective as us shipping image sensors in the billions.”
Sony’s huge presence in the image sensor market will certainly help push this technology to clients at an enormous scale. Hanson notes that the company has more than 60 percent market share, and shipped about 1.6 billion sensors last year, including for all three cameras in Apple’s iPhone 11 Pro.
This first-generation AI image sensor, though, is unlikely to end up in consumer devices like smartphones and tablets, at least to begin with. Instead, Sony will be targeting retailers and industrial clients, with Hanson citing Amazon’s cashier-less Go stores as one possible application.
In Amazon’s Go stores, the retailer uses numerous AI-enabled cameras to track shoppers and charge them for items they grab from the shelves. “They put tons of cameras, and they’re running petabytes of data, on a daily basis through a small convenience store,” says Hanson. Reports suggest that the resulting hardware costs have slowed the rollout of these stores. “But if we can miniaturize that capability and put it on the backside of a chip we can do all kinds of interesting things.”

In addition to cost savings, there are privacy benefits. If the AI chip is integrated directly onto the back of the image sensor, then object detection can be done on-device. Instead of sending off data to be analyzed, either to the cloud or a nearby processor, the image sensor itself performs whatever AI analysis is needed and simply outputs the resulting metadata instead.
Benefits include greater privacy and faster processing speeds
So, if you want to build a smart camera that detects whether or not someone is wearing a mask (a very real concern right now), then an IMX500 image sensor can be loaded with the relevant algorithm, allowing the camera to send off quick “yes” or “no” pings.
“Now we’ve eliminated what would normally be a 60 frames per second, 4K video stream to just that one ‘hey, I recognize this object,’” says Hanson. “That can reduce data traffic [and] it also helps things like privacy.”
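To get a sense of the scale of that reduction, here is a back-of-the-envelope comparison. The bitrate and message-size figures below are illustrative assumptions, not numbers from Sony:

```python
BITS_PER_BYTE = 8

# Assumption: a compressed 4K / 60 fps stream is on the order of ~50 Mbps.
video_bps = 50_000_000

# Assumption: a metadata "ping" (label, confidence, timestamp) is roughly
# 100 bytes, emitted about once per second per detection event.
ping_bps = 100 * BITS_PER_BYTE * 1  # bits per second

reduction = video_bps / ping_bps
print(f"Video stream: {video_bps / 1e6:.0f} Mbps")
print(f"Metadata pings: {ping_bps} bps")
print(f"Approximate data reduction: ~{reduction:,.0f}x")
```

Even with the assumptions off by an order of magnitude, the traffic saved by sending metadata instead of raw video is enormous, which is the point Hanson is making.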
Another big application is industrial automation, where image sensors are needed to stop so-called co-bots — robots designed to work in close proximity to humans — from bashing their flesh-and-blood colleagues. Here the main advantage of an integrated AI image sensor is speed. If a co-bot detects a human where they shouldn’t be and needs to come to a quick stop, then processing that information as quickly as possible is paramount.
Sony says the IMX500 is far faster for these kinds of tasks than many other AI cameras, with the ability to apply a standard image recognition algorithm (MobileNet V1) to a single video frame in just 3.1 milliseconds. By comparison, says Hanson, competitors’ chips, like those made by the Intel-owned Movidius (which are used in Google’s Clips camera and DJI’s Phantom 4 drone), can take hundreds of milliseconds — even seconds — to process.
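That 3.1 ms figure is what makes real-time use possible: it implies the sensor can run inference far faster than video arrives. A quick sanity check (the 300 ms competitor latency below is a hypothetical stand-in for "hundreds of milliseconds"):

```python
# Sony's stated per-frame inference latency for MobileNet V1 on the IMX500.
frame_latency_ms = 3.1
max_inference_fps = 1000 / frame_latency_ms  # inferences possible per second

video_fps = 60  # a typical 4K video frame rate

print(f"Max inference rate: {max_inference_fps:.0f} fps")
print(f"Keeps up with {video_fps} fps video: {max_inference_fps >= video_fps}")

# A hypothetical chip needing 300 ms per frame manages only ~3 fps,
# far too slow to analyze every frame of a 60 fps stream.
competitor_fps = 1000 / 300
print(f"Hypothetical competitor: {competitor_fps:.1f} fps")
```

At roughly 320 inferences per second, the IMX500 could in principle analyze every frame of a 60 fps stream with headroom to spare, which matters for the co-bot safety scenario above.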
The big bottleneck, though, is the ability of the IMX500 to handle more complex analytical tasks. Right now, says Hanson, the image sensor can only work with pretty “basic” algorithms. That means more sophisticated and varied tasks, like driving an autonomous car, will certainly require dedicated AI hardware for the foreseeable future. Instead, think of the IMX500 as a simple, single-application device.
But this is only the first generation, and the technology will undoubtedly improve in the future. Right now, cameras are smart because they send their data to computers. In the future, the camera itself will be the computer, and all the smarter for it.
Test samples of the IMX500 have already started shipping to early customers, with prices starting at ¥10,000 ($93). Sony expects the first products using the image sensor to arrive in the first quarter of 2021.