
Are You Ready for the Coming Age of Mass Genius?

Some tech experts believe the intelligence of the human race is about to skyrocket. Some of you, we know, are thinking: “And not a moment too soon!”


What would account for this ballistic bulge in bubba’s brainpower?

Peter Diamandis thinks he knows. Diamandis holds degrees in molecular genetics and aerospace engineering from MIT, and made his reputation as the best-selling author of Abundance: The Future Is Better Than You Think. He says the growth of internet connectivity, the cloud, and maturing brain-computer interfaces will bring a dramatic acceleration of mass genius, both individual and collective. Not only will the world at large become smarter; each of us will become a genius.

Mass Genius through Connectivity

The first factor Diamandis cited is connectivity. For most of history, he said, the greatest intellects have been squandered. Many were hindered by barriers of sex, race, ethnicity, class, and culture. Most, though, simply lacked means to communicate their insights to the world.

The coffee houses founded in eighteenth-century Britain and continental Europe played a critical role in destroying these barriers. In the coffee houses, people of all classes and vocations met to discuss ideas, debate them, and refine their own thinking based on the feedback they received from others. The intellectual ferment of coffee house culture fostered the Enlightenment and the Industrial Revolution.

Concentrating population in large urban centers extended the idea-generating power of the coffee house to many more people.

Diamandis says the internet is our current version of the eighteenth-century coffee house and the urban center, but many times more powerful than both. Our networks need no longer be confined to our neighborhoods or our cities; they can now encompass the entire globe. More than four billion people now have internet connections. Soon all of us will.

The Cloud and Brain-Computer Interfaces

The second factor, Diamandis says, is the cloud, which will be enhanced by brain-computer interfaces. The author says we will soon be able to upload our thoughts to the cloud and download information directly to our brains, bypassing the usual cumbersome learning process. Research will become more efficient by several orders of magnitude, because it will be rooted in what Diamandis calls “the neurological basis for innovation.”

Is Diamandis right about this? We should certainly hope so. We wouldn’t be burdened with so many selfies or cat videos on social media. We might even hear Joy Behar or Barbra Streisand say something sensible.

To tap your own genius, you need a reliable internet connection. For the one that works best for you, call Satellite Country. We can help.

Call 1-855-216-0185


20,000 Devices Support Amazon’s Alexa

Amazon’s famous artificial intelligence (AI) platform has become a force in the consumer market. At last week’s IFA consumer electronics conference in Berlin, Amazon announced that its Alexa app now works with more than 20,000 devices. This is an impressive advance, given that the firm said only last January that Alexa worked with 4,000 devices. A fivefold increase in eight months is almost unheard of for any product.


Daniel Rausch, an Amazon executive, said, “Alexa has sung Happy Birthday millions of times to customers, and she’s told over 100 million jokes.”

20,000 Devices = Fivefold Increase in Eight Months

Rausch confirmed that Alexa works with more than 20,000 devices made by 3,500 manufacturers.

Amazon manufactures its own Alexa devices, including the Echo smart speakers, Fire TV, and Fire tablets. But the company has also been trying to get the app into as many third-party devices as possible.

What is Alexa?

Alexa is an artificial intelligence, or machine learning, app. It works in phones, speakers, TV sets, thermostats, and even cars. At this year’s IFA conference, Netgear and Huawei announced that the app would be in their home routers. Amazon said it wants to bring the app into hotels and offices.

Alexa now has more than 50,000 skills. Hundreds of thousands of developers in 180 countries work with it. Many more are coming.

Rausch is especially proud of Alexa’s voice control functions. “It turned out that your smart phone is actually a pretty terrible remote control for your house,” he said. “You don’t want to fish around in your pocket, open applications, unlock your phone to control the device right in front of you. Voice has truly unlocked the smart home. That’s because it’s actually simpler.”

“You won’t need a manual,” Rausch said, “because our devices learn about you, not the other way around.”

What are Amazon’s competitors doing?

Alexa is not alone in its market. It competes with Apple’s Siri, Google Assistant, and Microsoft’s Cortana. Alexa is by far the most successful, though. Other providers are scrambling to replicate its market penetration, and will likely take years to catch up. Still, they are moving energetically to get their apps into laptops, phones, appliances, and even vehicles.

Amazon seems unworried about its competitors. Its Echo smart speaker leads the voice assistant market by a wide margin. And Rausch says his company “has barely scratched the surface” of what voice control can do.

Getting into 20,000 devices in four years is an impressive feat. But for Amazon, evidently, it’s just the beginning.

(For the most reliable internet connection, shop with Satellite Country. Talk to us. We can help.)


ROBOTS:  WILL THEY TAKE OUR JOBS?


Apocalypse by Robots is a recurring theme in technical publications and science fiction. As our tools become more sophisticated and able to learn, the more alarmist writers tell us, they might attack us. A machine programmed to make paper clips might try to turn the entire world into a paper clip factory. Robots programmed to find their own power sources could deny us the power we need for survival. Robots could be deadly.

Some of the less excitable tech writers dismiss such alarms. They still say, though, that automation will foster mass unemployment. In fact, they argue, we’ll need a guaranteed minimum income to keep the hordes of technologically unemployed from rioting in the streets because they can’t support themselves. MIT’s Technology Review, Wired, Gizmodo, The Verge, Singularity Hub, Mashable, Ars Technica: almost every technical rag echoes the same theme.

There are a few dissenting voices, but almost every article addressing the subject warns that automation will destroy far more jobs than it will create. Yet in the past, technical development has disrupted job markets only for the short term, and in the long run has created far more jobs, and far more remunerative jobs, than it has destroyed.

But this time it’s different, the alarmists say. We can’t use the Industrial Revolution or the dawn of the Information Age as our model. The big difference now is artificial intelligence or machine learning. As our tools learn from ‘experience’, instead of just responding to specific inputs, the need for direct human control nearly vanishes. A small technical and financial elite will control almost everything, and will become fantastically wealthy. The rest of us will be mired in poverty, permanently shut out from the labor force.

How Have Robots Affected Job Markets Before?

This certainly is a grim prospect. But is it likely?

We doubt it. Suppose we concede that the distant past has nothing to teach us about our own futures, and look only at the rise of robotics over the last sixty years. In all that time, robots have finally and irrevocably destroyed only one job category: elevator operators. Even so, automation has created more jobs for elevator engineers and repairmen.

We’ve seen the same trend in other industries. Replacement of land lines with mobile phones has radically altered the work of telecom technicians, but has not made them obsolete. Replacing cathode ray tubes with LCD, LED, and OLED TV sets radically shrank the market for TV repairmen, but created new jobs for electronics designers and coders. The waning influence of broadcast TV networks has opened new markets in cable TV, satellite TV, and streaming video. It has created more demand for content, and for content creators.

Automation has brought us an enormous blessing: assignment of the most dangerous, dirty, exhausting, and boring tasks to machines. This leaves us with far less onerous work, often in air-conditioned comfort. Machine learning will accelerate this trend. The tasks we handle in the future might not be what we call ‘work’ today. They might even seem like play. But suppose you could enter a time machine, and could talk with a farmer or a merchant living two centuries ago. If you describe your current job to him, will he understand it? Will he consider it work? Not likely. He’ll probably think you’re just playing.

What Can You Do?

This doesn’t mean you should be complacent. If you’re unprepared, a rapidly changing job market can hurt you badly. Your best job insurance is continually upgrading your skills.

Above all else, learn how to learn. We can’t always predict what occupations will be in demand. Students who spend years preparing for specific jobs in trendy fields often find, not long after they graduate, that their hard-won skills are obsolete. If you have solid communication, math, and reasoning skills, and if you know a fair amount about literature and history, you have a huge advantage over others. What you don’t know, you can learn quickly.

With a nimble mind and a solid work ethic, you probably don’t need to fear competition by robots.

(If you need a reliable internet connection, talk to us. We can help.)


IBM PREDICTS FUTURE TECHNOLOGIES


This morning, IBM Research released a report predicting five major innovations that will affect our lives profoundly by 2022. Among its predictions are:

  • Artificial Intelligence and Mental Health: Computers will analyze patient speech and written words. Anomalies will reveal developmental disorders, mental illness, or neurological disease. Medical personnel will be able to track these conditions in real time, without having to wait for the patient to visit the clinic for a checkup. AI tracking through wearable devices will complement drug therapy and clinical treatment.
  • Superhero Vision: Our eyes detect less than 1% of the electromagnetic spectrum. With hyperimaging tools and AI, though, we could ‘see’ far more than is revealed in visible light. With portable devices, we could sense hidden opportunities or threats. Our cars could ‘see’ through rain or fog, detect invisible hazards such as black ice, and tell us the distance and size of objects in our paths.
  • Macroscopes: With machine learning and software, we could organize information about the physical world. Billions of devices within our range of vision will gather massive and complex data. This is what IBM calls the ‘macroscope’. It will enable us to read and instantly analyze the useful data all around us, while filtering out irrelevancies.
  • Medical Lab on a Chip: By analyzing body fluids, devices you carry or wear will tell you if you need to see a physician. A single chip will handle all of the detection and analysis that currently requires a full biochemistry lab.
  • Smart Sensors that Detect Pollution: With much more sensitive sensors, we could easily detect storage and pipeline leaks. Even the most minute and invisible leaks could be caught in real time. Sensors will report problems at the speed of light.

In previous reports, IBM predicted classrooms that learn you, touching through your phone, and computers with a sense of smell.

 (To take full advantage of emerging technologies, you need a reliable internet connection. Talk to us. We can help.)


MACHINE PREDICTS HUMAN BEHAVIOR IN VIDEO

Most of us can predict what will happen just after we see two people meet: a handshake, a punch, a hug, or a kiss. We’ve honed this ability through decades of experience in dealing with people. Our ‘intuition’ is thoroughly trained.

A machine, no matter how competently programmed, has trouble evaluating such complex information.

If computers, though, could predict human action reliably, they would open up a host of possibilities. We might wear devices that suggest responses to differing situations. We might build emergency response systems that predict breakdowns or security breaches. Robots would better understand how to move and act among humans.

In June, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) announced a huge breakthrough in the field. Researchers there developed an algorithm for what they call ‘predictive vision’. It can predict human behavior much more accurately than anything that came before.

The system was trained with YouTube videos and TV shows, including The Office and Desperate Housewives. It can predict when two characters will shake hands, hug, kiss, or ‘high five’. It also predicts what objects will appear in a video five seconds later.
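In spirit, though not in mechanism, this kind of prediction can be sketched as learning which action most often follows an observed context. Below is a minimal sketch in Python; the scene labels are invented stand-ins for what CSAIL’s system actually learns from raw pixels with deep neural networks:

```python
# A drastically simplified sketch of next-action prediction: count which
# action most often follows each observed context. The clip labels are
# invented for illustration; the real system learns from raw video.
from collections import Counter, defaultdict

def train_predictor(sequences):
    """Tally, for each context, how often each next action follows it."""
    following = defaultdict(Counter)
    for seq in sequences:
        for context, nxt in zip(seq, seq[1:]):
            following[context][nxt] += 1
    return following

def predict(following, context):
    """Return the most frequently observed follow-up action, if any."""
    counts = following.get(context)
    return counts.most_common(1)[0][0] if counts else None

# Toy 'training videos' reduced to labeled event sequences.
clips = [
    ["approach", "extend_hand", "handshake"],
    ["approach", "extend_hand", "handshake"],
    ["approach", "open_arms", "hug"],
]
model = train_predictor(clips)
print(predict(model, "extend_hand"))  # most often followed by "handshake"
```

The real achievement, of course, is that CSAIL’s network infers the context itself from pixels, rather than being handed clean labels.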

Previous approaches to ‘predictive vision’ have followed one of two patterns. One is to examine the pixels in an image. From this data, the machine tries to construct a future image, pixel by pixel. MIT’s lead researcher in this project calls this process “difficult for a professional painter, much less an algorithm”.

The second approach is for humans to label images for the computers in advance. This is practical only on a very small scale.

MIT’s CSAIL team instead offered the machine “visual representations”. These were freeze-frame alternate versions of how a scene might appear. “Rather than saying that one pixel is blue, the next one is red… visual representations reveal information about the larger picture, such as a certain collection of pixels that represents a human face”, the lead researcher said.

CSAIL uses ‘neural networks’ to teach computers to scan massive amounts of data. From this, the computers find patterns on their own.

CSAIL trained its algorithm with more than 600 hours of unlabeled video. Afterward, the team tested it on new video featuring objects and human action.

Though CSAIL’s algorithm was not as accurate as humans in predicting human behavior, it is a huge advance over what came before. Very soon, it’s likely to outperform humans. When it does, its impact on our lives could be revolutionary.

(Editor’s note: machine learning is a branch of artificial intelligence; this article uses the terms interchangeably.)

(Get the most out of information technology. Get the most out of your machines. For this, you need a strong web connection. Talk to us. We can help.)


TRAINING YOUR COMPUTER- LIKE A DOG

To most of us, computer coding is an inscrutable art. Code writers are the high priests of the Information Age, a technical elite whose work is so far beyond our understanding it seems to be magic. They even speak a different language.

This may be changing. With recent advances in artificial intelligence, your next computer might not need written software or OS code. Instead, you can look forward to training the machine, like a dog.

Conventional programming is the writing of detailed, step-by-step instructions. Any errors or omissions in the code will affect the computer’s functions, and errors cannot be corrected without rewriting the code. Operating system developers, most notably Microsoft, often have to issue downloadable “patches” to repair defective code. Some systems, such as Windows 8, prove so bloated and error-prone that they are beyond salvage and have to be abandoned. The coding protocol is unforgiving. “Garbage in, garbage out” is an industry watchword for a reason. The computer cannot learn, and cannot correct its mistakes. It can do only what the code has taught it to do.

With machine learning, your computer won’t be coded with a comprehensive set of instructions. It will be trained, and you very likely will have a big hand in training it. As Edward Monaghan wrote for Wired, “If you want to teach a neural network to recognize a cat, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands… of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.”
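Monaghan’s point about coaching rather than rewriting can be shown with a toy model. The sketch below is a bare-bones logistic-regression ‘cat detector’ on two made-up numeric features; all data, labels, and numbers are invented for illustration, and a real system would learn from raw photos with a neural network:

```python
# A toy illustration of "coaching" a model instead of rewriting its code.
# Each "photo" is reduced to two invented numeric features; everything
# here is illustrative, not a real vision system.
import math

def train(samples, labels, epochs=5000, lr=0.5):
    """Fit a tiny logistic-regression 'cat detector' by gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    n = len(samples)
    for _ in range(epochs):
        gw0 = gw1 = gb = 0.0
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y                      # error signal: predicted minus true
            gw0 += err * x[0]
            gw1 += err * x[1]
            gb += err
        w = [w[0] - lr * gw0 / n, w[1] - lr * gw1 / n]
        b -= lr * gb / n
    return w, b

def is_cat(x, w, b):
    """Classify as cat when the predicted probability exceeds 0.5."""
    return (w[0] * x[0] + w[1] * x[1] + b) > 0.0

# Initial training set: cats (1) and non-cats (0).
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train(X, y)

# If it keeps misclassifying foxes as cats, we don't rewrite train();
# we label some fox examples as not-cat and simply train again.
X += [[0.7, 0.3], [0.6, 0.2]]
y += [0, 0]
w, b = train(X, y)
print(is_cat([0.9, 0.85], w, b))
```

The point is the workflow: the correction came from more labeled examples, not from touching a single line of the training code.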

Machine learning has been with us, in concept, for several decades. It has become practical only recently, though, with revolutionary advances in the development of neural networks, systems modeled on the complex array of neurons in the brain. Machine learning already shapes much of our online activity. Skype Translator translates speech into different languages in real time. The collision-avoidance systems in self-driving cars are neural networks. So is the facial identification feature in Google Photos. Facebook’s algorithm for adjusting user news feeds is a neural network. Even Google’s world-dominating search engine, long a monument to the power of the human coder, has begun to depend heavily on machine learning. In February, Google signaled its commitment to it by replacing the veteran chief of its search engine with John Giannandrea, one of the world’s leading experts in neural networks and artificial intelligence.

Giannandrea hit the ground running. He has devoted Herculean effort to training Google’s engineers in machine learning. “By building these learning systems”, he said last fall, “we don’t have to write these rules anymore.”

Our increased reliance on neural networks will bring radical changes in the role and status of the programmer. The code writer understood precisely how the computer functioned, since he wrote every line of its instructions. It could do nothing he hadn’t told it to do. With machine learning, though, he’s not entirely sure how it performs its assigned tasks. His relationship with it is no longer that of a god exercising absolute rule over his creation; it’s more like the relationship between parent and child, or a dog owner and his dog. Such relationships always entail a certain amount of mystery.

Your computer’s training will not end with your purchase of it. You will teach it what functions you want, how you want them carried out, even the quirks in your personality. It will get continually ‘smarter’ as it adapts to your feedback. You will be training your computer for its entire operating life.

Danny Hillis, writing for The Journal of Design and Science, said, “Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.”

(Training your computer will require a reliable internet connection. Is yours adequate? If it isn’t, talk to us. We can help.)


Will robots replace us in the labor market? With accelerating automation, it may sometimes seem that our jobs are doomed.

Robots deliver pizza. Google has developed cars that drive themselves. This is only the tip of an emerging iceberg.

Two years ago, Momentum Machines developed a robot that could provide freshly ground and grilled hamburgers to order, with freshly sliced vegetable toppings, and customized meat or seasoning combinations. If a customer wants a meat patty with one-third bison and two-thirds pork, the robot will provide it. And it can produce 360 custom burgers per hour.

A few years ago, the Los Angeles Times began using an artificial intelligence application to write weather and earthquake updates. Afterward, the AI app wrote sports articles. The newspaper tested the app by asking readers to compare articles written by the robot with articles written by human reporters. Very few could tell the difference.
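Published accounts describe the newspaper’s earthquake app as filling sentence templates from structured data feeds. Here is a minimal sketch of that approach; the field names and the magnitude threshold are hypothetical, not the newspaper’s actual system:

```python
# Minimal sketch of template-based automated reporting, in the spirit of
# the LA Times earthquake updates. Field names, threshold, and sample
# data are illustrative assumptions only.

def quake_report(data):
    """Fill a fixed sentence template from a structured data record."""
    size = "major" if data["magnitude"] >= 6.0 else "minor"
    return (
        f"A {size} earthquake of magnitude {data['magnitude']} struck "
        f"{data['distance_mi']} miles from {data['place']} on {data['date']}."
    )

feed = {
    "magnitude": 4.7,
    "distance_mi": 9,
    "place": "Westwood, California",
    "date": "March 17",
}
print(quake_report(feed))
```

Against copy this formulaic, it is no surprise that readers struggled to tell the robot’s articles from the humans’.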

If these examples aren’t daunting enough, some researchers believe that artificial intelligence, the internet of things, and virtual reality will make most human jobs obsolete within a decade or two. Robots, we are told, will handle so many of the tasks that now require human labor that very few jobs are likely to survive. Machines will be able to learn, and will constantly become more competent. Eventually, they will know so much that they won’t need human supervision. Some analysts argue that we’ll need a universal minimum income, so the hordes of displaced workers can survive.

These frightening prophecies, though, are out of touch with reality. We’ve been through technological revolutions before, and they’ve paved the way for more jobs, not fewer.

By inventing mechanical molds and the movable type press, Johannes Gutenberg drove thousands of European scribes out of their vocations. But his invention created new industries. It made the mass production of books and pamphlets possible, and without it the newspaper industry would never have existed. The movable type press killed thousands of jobs, and created millions more.

The automation of agriculture was even more disruptive to labor markets. In the nineteenth century, four out of five American jobs were on ranches or farms. Today, fewer than 3% are. Automated farming freed millions of people for other, less onerous work at higher wages.

We are on the verge of the next great leap in technology. It will, no doubt, destroy tens of millions of jobs. Some workers are likely to be displaced for months, some for years. Transitions to the new information-based economy will be difficult. For every job the robots destroy, though, they’ll create several more. A 2011 study by the International Federation of Robotics found that the use of one million industrial robots led directly to the creation of three million jobs. Increased use of robots usually fosters lower unemployment.

The jobs that survive the robot revolution are likely to be the ones requiring creativity, empathy and human connection, negotiation and persuasion, and the repair and maintenance of robots. We are certain to see more job openings in science, technology, engineering, and math fields. As robots handle more of our repetitive tasks, we will have more opportunity for easier and more interesting work.

Welcome the robots. More than likely, they are your friends.

(To benefit from automation, you need current information. For this, a reliable internet connection is necessary. Talk to us. We can help.)


MEMORY BY GOOGLE

Have you ever forgotten a business appointment? Have you ever forgotten your spouse’s birthday? Have you ever forgotten your most important point while briefing your boss about a critical project?

Memory often fails us when we need it most. Within a few years, though, you might not need it. Machines will remember what you need to know.

Last month, IBM patented an algorithm it calls an “automatic Google for the mind”. It could track your behavior and speech, analyze your intentions, and, discerning when you seem to have lost your way, offer suggestions to prod your memory. Dr. James Kozlowski, a computational neuroscientist for IBM Research, is the lead researcher for the automated memory project. Kozlowski says he helped develop his company’s new ‘cognitive digital assistant’ for people with severe memory impairment, but it could help all of us with research, brainstorming, recovering lapsed memories, and forming creative connections.

IBM’s new cognitive tool tackles the most common cause of memory failure: absence of context. Memory, for most of us, is a web of connections. Remembering a single aspect of an experience, we can call up others. To remember is to find the missing piece in a puzzle. If you can’t find the first clue, you can’t find the second, and you don’t have a mental map for the information you need.

Dr. Kozlowski says IBM has found the solution for our memory failures. His cognitive assistant models our behaviors and memories. It hears our conversations, studies our actions, and draws conclusions about our intentions from our behavior and speech patterns, and our conversations with others. From this data, it can discern when we have trouble with recall. It then will guess what we want to know, suggesting names and biographical data within milliseconds. By studying our individual quirks, it will learn what behavior is normal for us, and when we need help.
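The core idea of context-triggered recall can be sketched as a simple keyword-to-note index. The matching rule and the sample notes below are invented for illustration; IBM’s actual assistant models speech and behavior far more deeply:

```python
# A toy sketch of context-triggered memory prompts. The data and the
# keyword-matching rule are invented; the real system learns behavior
# patterns rather than matching literal words.
from collections import defaultdict

class MemoryAssistant:
    def __init__(self):
        self.notes = defaultdict(list)   # keyword -> remembered items

    def remember(self, keywords, note):
        """File a note under each of its context keywords."""
        for keyword in keywords:
            self.notes[keyword.lower()].append(note)

    def prompt(self, utterance):
        """Surface stored notes whose keywords appear in what was said."""
        words = utterance.lower().split()
        hits = []
        for word in words:
            hits.extend(self.notes.get(word, []))
        return hits

assistant = MemoryAssistant()
assistant.remember(["budget", "overrun"], "You flagged a 12% overrun in the last meeting.")
assistant.remember(["birthday"], "Your spouse's birthday is next Tuesday.")

print(assistant.prompt("let's review the budget figures"))
```

The hard part, which this sketch skips entirely, is noticing *when* you need the prompt; that is what IBM’s behavioral modeling is for.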

Synced with your phone, the automated cognitive assistant will search its database of phone numbers to find out who’s calling you. Before you answer, it will display the caller’s name, highlights of your recent conversations, and important events in the caller’s life. At a business meeting, your digital assistant will, on hearing certain words, recall related points from past meetings, along with your research on the subject. It will display them on your mobile device, or ‘speak’ them into an earpiece.

It’s likely to be several years before IBM’s automated cognitive assistant is in common use. A few bugs stand in the way of commercialization, but it’s still an impressive achievement.


REPLACING THE PASSWORD

Security is one of our most important concerns in using the internet. Carelessness can expose our devices to malware and hacking, putting our bank accounts and our identities at risk.

The password is a partial solution, our best attempt to limit the risk of internet use. It’s not a perfect defense, though, and it brings drawbacks of its own. Passwords that are easy to remember may also be easy for hackers to guess. More difficult passwords are easier to forget, and forgetting them can lock us out of our devices or our secured sites. Multiple passwords compound the burden on memory.

In the future, even the best, most complex passwords may not be adequate defenses. As hackers gain access to ever more processing power, brute force attacks could overcome even our most sophisticated encryption efforts. What, then, can we do?
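The scale of the brute-force threat is easy to make concrete with a little arithmetic. The sketch below assumes a hypothetical rate of ten billion guesses per second; real attack speeds vary enormously with hardware and with how the password is stored:

```python
# Back-of-the-envelope worst-case brute-force time for a password.
# The guess rate is an assumed figure for illustration only.

def worst_case_years(alphabet_size, length, guesses_per_second):
    """Years needed to exhaust the full keyspace at a given guess rate."""
    keyspace = alphabet_size ** length
    seconds = keyspace / guesses_per_second
    return seconds / (3600 * 24 * 365)

# 8 lowercase letters: the whole keyspace falls in under a minute.
print(worst_case_years(26, 8, 1e10))
# 12 characters drawn from 94 printable symbols: over a million years.
print(worst_case_years(94, 12, 1e10))
```

The arithmetic cuts both ways: length and variety buy enormous safety margins today, but every jump in attacker processing power shrinks them.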

In the long run, replacing the password may be our only realistic chance of protecting our data, our money, and our identities. But what will you replace your password with?

One of the most promising new security protocols is the use of biometric data. Replacing your password with a fingerprint, a facial scan, or an iris scan would spare you from remembering a complex code. A hacker would find it far harder to duplicate your features, your fingerprint, or your irises than to guess a password, no matter how much processing power he had. Without physical access to your computer, he would have little chance of breaking in.

Dell, Microsoft, DigitalPersona, and a few other vendors sell fingerprint scanners for computer security. All sell at retail for less than $80.00, and one sells for less than $20.00. After installing your scanner, you can log in just by pressing your finger on the designated sensor. You’ll never need a login password again.

Iris or retinal scanners are commonly used for airport and military security. They are too expensive for most consumer uses, but this is expected to change. Improvements in sensor technology will drive prices downward.

One of the most important technologies replacing the password will be machine learning. Ray Kurzweil, one of the most famous computer scientists, as well as a prominent author, inventor, and futurist, said that in the future “the machine will learn you”. Advanced software algorithms will learn the habits of computer users. Eventually, your computer will know your patterns of use and the cadence of your keystrokes. Your computer could detect attempted hacking simply because the hacker’s use patterns will differ from yours. No other security protocol will be necessary.
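Kurzweil’s idea that “the machine will learn you” can be sketched as simple anomaly detection on typing rhythm. The intervals below are invented, and real keystroke-dynamics systems use much richer features than a single average:

```python
# A minimal sketch of flagging an unfamiliar user from typing rhythm.
# All timing data is invented for illustration.
import statistics

def build_profile(intervals):
    """Learn the owner's typical inter-keystroke interval (seconds)."""
    return statistics.mean(intervals), statistics.stdev(intervals)

def looks_like_owner(profile, intervals, z_limit=3.0):
    """Compare a new sample's mean interval against the learned profile."""
    mean, sd = profile
    sample_mean = statistics.mean(intervals)
    z = abs(sample_mean - mean) / (sd / len(intervals) ** 0.5)
    return z < z_limit

# Intervals recorded while the owner types, in seconds.
owner = [0.21, 0.19, 0.22, 0.20, 0.18, 0.23, 0.21, 0.20]
profile = build_profile(owner)

print(looks_like_owner(profile, [0.20, 0.22, 0.19, 0.21]))  # True: owner-like rhythm
print(looks_like_owner(profile, [0.45, 0.50, 0.48, 0.52]))  # False: much slower typist
```

A production system would model far more than speed, but the principle is the same: the computer learns what is normal for you, and treats everything else as suspect.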

For now, replacing your computer passwords with more advanced security tools requires time, effort, or money. Before long, you won’t need to expend extra effort or money, as all computers and (legitimate) websites will have adequate security tools built in.

Meanwhile, you may have to rely on your memory.