
20,000 Devices Support Amazon’s Alexa

Amazon’s famous artificial intelligence (AI) platform has become a force in the consumer market. At last week’s IFA consumer electronics conference in Berlin, Amazon announced that its Alexa app now works with more than 20,000 devices. This is an impressive advance, given that the firm said only last January that Alexa worked with 4,000 devices. A fivefold increase in eight months is almost unheard of for any product.


Daniel Rausch, an Amazon executive, said, “Alexa has sung Happy Birthday millions of times to customers, and she’s told over 100 million jokes.”

20,000 Devices = Fivefold Increase in Eight Months

Rausch confirmed that Alexa works with more than 20,000 devices made by 3,500 manufacturers.

Amazon manufactures its own Alexa devices, including the Echo smart speakers, Fire TV, and Fire tablets. But the company has also been trying to get the app into as many third-party devices as possible.

What is Alexa?

Alexa is an artificial intelligence, or machine learning, app. It works in phones, speakers, TV sets, thermostats, and even cars. At this year’s IFA conference, Netgear and Huawei announced that the app would be in their home routers. Amazon said it wants to bring the app into hotels and offices.

Alexa now has more than 50,000 skills. Hundreds of thousands of developers in 180 countries work with it. Many more are coming.
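
To give a sense of what a ‘skill’ looks like under the hood, here is a minimal sketch of the kind of backend a developer might write: a small AWS Lambda handler that answers a voice request with plain text. The intent name and reply text are invented for illustration; only the overall request-and-response shape follows the Alexa Skills Kit.

```python
# A minimal sketch of an Alexa skill backend: an AWS Lambda handler that
# answers a voice request with plain text. The intent name ("HelloIntent")
# and the reply text are illustrative placeholders.

def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        speech = "Welcome. Ask me to say hello."
    elif (request.get("type") == "IntentRequest"
          and request["intent"]["name"] == "HelloIntent"):
        speech = "Hello from a very small skill."
    else:
        speech = "Sorry, I didn't catch that."

    # The Alexa service expects a JSON response roughly in this shape.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```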

Rausch is especially proud of Alexa’s voice control functions. “It turned out that your smart phone is actually a pretty terrible remote control for your house,” he said. “You don’t want to fish around in your pocket, open applications, unlock your phone to control the device right in front of you. Voice has truly unlocked the smart home. That’s because it’s actually simpler.”

“You won’t need a manual,” Rausch said, “because our devices learn about you, not the other way around.”

What are Amazon’s competitors doing?

Alexa is not alone in its market. It competes with Apple’s Siri, Google Assistant, and Microsoft’s Cortana. Alexa is by far the most successful, though. Other providers are scrambling to replicate its market penetration, and they will likely take years to catch up. Still, they are moving energetically to get their apps into laptops, phones, appliances, and even vehicles.

Amazon seems unworried about its competitors. Its Echo smart speaker leads the voice assistant market by a wide margin. And Rausch says his company “has barely scratched the surface” of what voice control can do.

Getting into 20,000 devices in four years is an impressive feat. But for Amazon, evidently, it’s just the beginning.

(For the most reliable internet connection, shop with Satellite Country. Talk to us. We can help.)


IBM PREDICTS FUTURE TECHNOLOGIES


This morning, IBM Research released a report predicting five major innovations that will affect our lives profoundly by 2022. Among its predictions are:

  • Artificial Intelligence and Mental Health: Computers will analyze patient speech and written words. Anomalies will reveal developmental disorders, mental illness, or neurological disease. Medical personnel will be able to track these conditions in real time, without having to wait for the patient to visit the clinic for a checkup. AI tracking through wearable devices will complement drug therapy and clinical treatment. (A toy sketch of this idea follows the list.)
  • Superhero Vision: Our eyes detect less than 1% of the electromagnetic spectrum. With hyperimaging tools and AI, though, we could ‘see’ far more than is revealed in visible light. With portable devices, we could sense hidden opportunities or threats. Our cars could ‘see’ through rain or fog, detect invisible hazards such as black ice, and tell us the distance and size of objects in our paths.
  • Macroscopes: With machine learning and software, we could organize information about the physical world. Billions of devices within our range of vision will gather massive and complex data. This is what IBM calls the ‘macroscope’. It will enable us to read and instantly analyze the useful data all around us, while filtering out irrelevancies.
  • Medical Lab on a Chip: By analyzing body fluids, devices you carry or wear will tell you if you need to see a physician. A single chip will handle all of the detection and analysis that currently requires a full biochemistry lab.
  • Smart Sensors that Detect Pollution: With much more sensitive sensors, we could easily detect storage and pipeline leaks. Even the most minute and invisible leaks could be caught in real time. Sensors will report problems at the speed of light.
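
The first prediction, flagging anomalies in a patient’s speech, can be illustrated with a toy example. The features (speaking rate, pause length) and the threshold below are invented for illustration and are not drawn from IBM’s research; the idea is simply to flag measurements that drift far from a person’s own baseline.

```python
# Toy illustration: flag a speech sample whose measured features drift far
# from a person's own historical baseline. Features and threshold are
# invented for illustration only.
import statistics

baseline = {"words_per_min": [148, 152, 150, 147, 151],
            "avg_pause_sec": [0.42, 0.40, 0.44, 0.41, 0.43]}

def is_anomalous(sample, history, z_cutoff=3.0):
    """Return True if any feature lies more than z_cutoff standard
    deviations from that person's historical mean."""
    for name, values in history.items():
        mean = statistics.mean(values)
        stdev = statistics.stdev(values)
        if stdev and abs(sample[name] - mean) / stdev > z_cutoff:
            return True
    return False

print(is_anomalous({"words_per_min": 110, "avg_pause_sec": 0.95}, baseline))  # True
print(is_anomalous({"words_per_min": 149, "avg_pause_sec": 0.42}, baseline))  # False
```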

In previous reports, IBM predicted classrooms that learn you, touching through your phone, and computers with a sense of smell.

 (To take full advantage of emerging technologies, you need a reliable internet connection. Talk to us. We can help.)


MACHINE PREDICTS HUMAN BEHAVIOR IN VIDEO

Most of us can predict what will happen just after we see two people meet: a handshake, a punch, a hug, or a kiss. We’ve honed this ability through decades of experience in dealing with people. Our ‘intuition’ is thoroughly trained.

A machine, no matter how competently programmed, has trouble evaluating such complex information.

If computers could predict human action reliably, though, they would open up a host of possibilities. We might wear devices that suggest responses to differing situations. We might have emergency response systems that predict breakdowns or security breaches. Robots could better understand how to move and act among humans.

In June, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) announced a huge breakthrough in the field. Researchers there developed an algorithm for what they call ‘predictive vision’. It can predict human behavior much more accurately than anything that came before.

The system was trained with YouTube videos and TV shows, including The Office and Desperate Housewives. It can predict when two characters will shake hands, hug, kiss, or ‘high five’. It also predicts what objects will appear in a video five seconds later.

Previous approaches to ‘predictive vision’ have followed one of two patterns. One is to examine the pixels in an image. From this data, the machine tries to construct a future image, pixel by pixel. MIT’s lead researcher in this project calls this process “difficult for a professional painter, much less an algorithm.”

The second approach is for humans to label images for the computers in advance. This is practical only on a very small scale.

MIT’s CSAIL team instead offered the machine “visual representations.” These were freeze-frame alternate versions of how a scene might appear. “Rather than saying that one pixel is blue, the next one is red… visual representations reveal information about the larger picture, such as a certain collection of pixels that represents a human face,” the lead researcher said.
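
The general idea can be sketched in code. The example below assumes a PyTorch-style setup; the layer sizes, the encoder, and the loss are placeholders for illustration, not CSAIL’s actual architecture. Instead of predicting every future pixel, a small network predicts the compact representation of a frame a few seconds ahead.

```python
# Sketch of "predicting representations instead of pixels."
# Layer sizes, the encoder, and the loss are illustrative placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a video frame to a compact feature vector (a 'visual representation')."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, frame):
        return self.net(frame)

class Predictor(nn.Module):
    """Predicts the representation of a frame a few seconds in the future."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, rep):
        return self.net(rep)

encoder, predictor = Encoder(), Predictor()
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()))

# One training step on a (current frame, future frame) pair of unlabeled video.
current, future = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)  # dummy batch
loss = nn.functional.mse_loss(predictor(encoder(current)), encoder(future).detach())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The training signal is the point: the network is asked to be roughly right about the gist of the future scene, not exactly right about every pixel in it.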

CSAIL uses ‘neural networks’ to teach computers to scan massive amounts of data. From this, the computers find patterns on their own.

CSAIL trained its algorithm with more than 600 hours of unlabeled video. Afterward, the team tested it on new video featuring objects and human action.

Though CSAIL’s algorithm was not as accurate as humans in predicting human behavior, it is a huge advance over what came before. Very soon, it’s likely to outperform humans. When it does, its impact on our lives could be revolutionary.

(Editor’s note: machine learning is a branch of artificial intelligence.)

(Get the most out of information technology. Get the most out of your machines. For this, you need a strong web connection. Talk to us. We can help.)


TRAINING YOUR COMPUTER, LIKE A DOG

To most of us, computer coding is an inscrutable art. Code writers are the high priests of the Information Age, a technical elite whose work is so far beyond our understanding it seems to be magic. They even speak a different language.

This may be changing. With recent advances in artificial intelligence, your next computer might not need written software or OS code. Instead, you can look forward to training the machine, like a dog.

Conventional programming is the writing of detailed, step-by-step instructions. Any errors or omissions in the code will affect the computer’s functions, and they cannot be corrected without rewriting the code. Operating system developers, most notably Microsoft, often have to issue downloadable “patches” to repair defective code. Some systems, such as Windows 8, are so bloated and error-prone that they are beyond salvage and have to be withdrawn from the market. The coding protocol is unforgiving. “Garbage in, garbage out” is an industry watchword for a reason. The computer cannot learn, and it cannot correct its mistakes. It can do only what the code has taught it to do.

With machine learning, your computer won’t be coded with a comprehensive set of instructions. It will be trained, and you very likely will have a big hand in training it. As Edward Monaghan wrote for Wired, “If you want to teach a neural network to recognize a cat, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands… of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.”
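
As a concrete, if toy, illustration of that coaching workflow, the sketch below uses the scikit-learn library; the feature vectors are random stand-ins for real photos (in practice they would come from actual images of cats and foxes). When the model misclassifies, the fix is more labeled examples, not new rules.

```python
# Toy sketch of "coaching" a classifier instead of rewriting its code.
# The feature vectors are random stand-ins for real photos.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

# First round of training: some "cat" and "not cat" examples.
X_first = rng.normal(size=(200, 64))
y_first = rng.integers(0, 2, size=200)      # 1 = cat, 0 = not cat
model.partial_fit(X_first, y_first, classes=[0, 1])

# Later, the model keeps calling foxes cats. We don't edit any rules;
# we simply feed it a fresh batch of correctly labeled fox examples.
X_foxes = rng.normal(loc=0.5, size=(50, 64))
y_foxes = np.zeros(50, dtype=int)           # foxes are "not cat"
model.partial_fit(X_foxes, y_foxes)

print(model.predict(X_foxes[:5]))           # more of these should now come out 0
```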

Machine learning has been with us, in concept, for several decades. It has become practical only recently, though, with revolutionary advances in the development of neural networks, systems modeled on the complex array of neurons in the brain. Machine learning already shapes much of our online activity. Skype Translator translates speech into different languages in real time. The collision-avoidance systems in self-driving cars are neural networks. So is the facial identification feature in Google Photos. Facebook’s algorithm for adjusting user news feeds is a neural network. Even Google’s world-dominating search engine, long a monument to the power of the human coder, has begun to depend heavily on machine learning. In February, Google signaled its commitment to it by replacing the veteran chief of its search engine with John Giannandrea, one of the world’s leading experts in neural networks and artificial intelligence.

Giannandrea hit the ground running. He has devoted Herculean effort to training Google’s engineers in machine learning. “By building these learning systems,” he said last fall, “we don’t have to write these rules anymore.”

Our increased reliance on neural networks will bring radical changes in the role and status of the programmer. The code writer understood precisely how the computer functioned, since he wrote every line of its instructions. It could do nothing he hadn’t told it to do. With machine learning, though, he’s not entirely sure how it performs its assigned tasks. His relationship with it is no longer that of a god exercising absolute rule over his creation; it’s more like the relationship between parent and child, or a dog owner and his dog. Such relationships always entail a certain amount of mystery.

Your computer’s training will not end with your purchase of it. You will teach it what functions you want, how you want them carried out, even the quirks in your personality. It will get continually ‘smarter’ as it adapts to your feedback. You will be training your computer for its entire operating life.

Danny Hillis, writing for The Journal of Design and Science, said, “Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.”

(Training your computer will require a reliable internet connection. Is yours adequate? If it isn’t, talk to us. We can help.)