LETTER FROM THE EDITOR
posted: Jan 10, 2016
Welcome to AI Trends!
For those of you interested, a good short read on the history of artificial intelligence (AI) can be found on Wikipedia.
It’s hard to get introduced to AI without first asking some fundamental questions about its history and, frankly, wondering for yourself whether AI is really possible.
Using the Wikipedia reference above, here’s a quick history of AI, the AI Winters that we have witnessed since its inception, how my professional path crossed into and out of AI several times, and why we are publishing AI Trends.
The modern era of AI began with the Turing test, proposed by British mathematician Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: “I propose to consider the question, ‘Can machines think?'” His argument was that if a machine could carry on a conversation (over a teleprinter) indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”.
The field of AI research was founded as an academic discipline in 1956. The term “artificial intelligence” itself was coined at a conference held at Dartmouth College that year. John McCarthy and Marvin Minsky (who passed away last month) started the MIT Artificial Intelligence lab with $50,000. John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.
Looking back, the AI winters that eventually resulted stemmed largely from the optimism of the industry’s leading thinkers (then and throughout the history of AI). The first generation of AI researchers made these predictions about their work:
- 1958, H. A. Simon and Allen Newell: “within ten years a digital computer will be the world’s chess champion” and “within ten years a digital computer will discover and prove an important new mathematical theorem.”
- 1965, H. A. Simon: “machines will be capable, within twenty years, of doing any work a man can do.”
- 1967, Marvin Minsky: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
- 1970, Marvin Minsky (in Life Magazine): “In from three to eight years we will have a machine with the general intelligence of an average human being.”
In the early 1960s millions of dollars flowed into AI research, and by the mid-1970s the first AI Winter had set in. AI funding largely dried up until the early 1980s, which then witnessed the rise of AI again, mostly through the promise of expert systems: knowledge engineers would capture the essential rules used by human experts in specific areas (domains), codify them into software, and put the result in the hands of doctors, insurance underwriters, financial advisors, and others to automate and improve upon the work of human experts.
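For readers who never saw one, the essence of an expert system can be sketched in a few lines of modern code. This is only a toy illustration in the spirit of the 1980s rule-based shells; the rule names, thresholds, and verdicts below are entirely hypothetical, not taken from any real underwriting system:

```python
# A toy rule-based "expert system" for insurance underwriting.
# Domain rules are captured from human experts, then fired in
# priority order against each case. All rules and thresholds
# here are hypothetical, for illustration only.

def rule_too_many_claims(case):
    return "decline" if case["accidents"] >= 3 else None

def rule_high_risk_age(case):
    return "refer" if case["driver_age"] < 21 else None

def rule_clean_record(case):
    return "accept" if case["accidents"] == 0 and case["driver_age"] >= 25 else None

# Rule order encodes priority, as a knowledge engineer would specify it.
RULES = [rule_too_many_claims, rule_high_risk_age, rule_clean_record]

def evaluate(case):
    """Fire rules in priority order; the first conclusion wins."""
    for rule in RULES:
        verdict = rule(case)
        if verdict:
            return verdict
    return "refer"  # no rule fired: refer to a human underwriter

print(evaluate({"driver_age": 30, "accidents": 0}))  # accept
print(evaluate({"driver_age": 19, "accidents": 1}))  # refer
```

Real shells like AION added inference engines, backward chaining, and explanation facilities on top of this basic pattern, but the core idea was the same: separate the expert's knowledge (the rules) from the machinery that applies it.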
It was here that I personally got involved with AI. Having become completely bored with my profession as a CPA, I went back to college, earned an M.S. in Computer Engineering, and was offered a research fellowship in AI via a joint program between Boston University, MIT, and GE. At GE, I worked on “AI” projects focused on building expert systems to enhance the jet engine turbine blade inspections carried out at GE’s jet engine manufacturing facility in Cincinnati, OH. The hope was that we could use the AI languages of the time (LISP and Prolog) to drive the automated quality control systems, which comprised X-ray imaging, visual inspection, and FPIM (fluorescent penetrant inspection module). All of this was previously driven by a series of Digital Equipment Corp (DEC) PDP-11s that controlled the inspection systems with code written in FORTRAN (which, frankly, remained in FORTRAN after our research). LISP and Prolog weren’t up to the task, nor were the AI hardware systems of the time (Symbolics and other LISP machines).
Wanting to apply my knowledge of AI and expert systems in business, I then went on to head up an advanced software technology group at the Hanover Insurance Company (which was then, and remains today, a top-10 casualty insurance company). We used AION, an expert system software platform (today owned by CA Technologies).
Our team succeeded in deploying one of the country’s first underwriting expert systems. Unknowingly, I was at that point about to begin my transition into becoming a serial entrepreneur: I was asked to chair a conference on AI in financial services and to write for several expert systems newsletters and IT publications.
During my tenure in the IT department at Hanover, I witnessed for the first time the challenges IT departments have in understanding their company’s business needs (in this case, processing insurance claims and performing underwriting), much less codifying it all into COBOL systems. In addition to deploying the AION expert system, I thought it a good idea for the IT department to stay ahead of emerging technologies, so I began an effort to have the programmers build an engineering discipline into their systems analysis and software development efforts (what else could a computer engineer do to help an IT department?). For those who remember, these efforts were labeled computer-aided software engineering (CASE).
Within a year of attending a few of these CASE and expert system conferences, and luckily just as the second AI Winter was beginning, I started my first publishing company (covering CASE and application development, not AI!). Since then I’ve built and sold four integrated publishing businesses (magazines, conferences, and research).
A decade after leaving AI in the 1990s, I re-entered the AI market and co-founded what became the largest consumer robotics event in the U.S. (RoboNexus). It was an interesting journey. After I met with Rodney Brooks at MIT, he introduced me to the management team of iRobot, and we were off to the races.
At the first RoboNexus, our lead sponsor, iRobot, launched the Roomba. The event also happened to be held a few weeks after the Google IPO in 2004. I met Sergey Brin at our event, and he seemed fascinated to meet the robotics leaders at the dais. What I have read since about Sergey Brin and Larry Page should be no surprise. That same year, Sergey Brin was quoted as saying: “If you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” (source) And according to Steve Jurvetson, the successful venture capitalist at Draper Fisher Jurvetson: “Every time I talk about Google’s future with Larry Page, he argues that it will become an artificial intelligence.” (2005, source)
I guess I saw another mini AI/robotics winter coming then, and sold the robotics publishing firm, Robotics Trends, a couple of years later. At the time I was also running a smaller telecommunications publishing business where we launched WiMAX World (in 2004 there was no WiMAX equipment, so we were a little ahead of the curve). Soon WiMAX World transitioned to 4G World and became the fastest-growing telecom event in the U.S. (which we internationalized and sold).
There were other new businesses and events along the way; you can check those out here.
All the vectors are aligning yet again for AI. Today we live in one big digital world. We’ve got more data than we can handle. Even our smallest computers perform tasks that weren’t possible only a few years ago, and quantum computing is on the horizon. The next rapid advances in computing will arguably take place in software. Today, the AI market is one of the fastest-growing business and technology markets. In all its flavors, AI is finally beginning to leave the lab and enter the market. Even though the definition of AI remains open, forecasters are trying to quantify AI in dollar terms:
- By 2018, AI will be incorporated into about half of all apps developed, according to research firm IDC, and by 2020, savings fueled by AI (in reduced people costs and increased workflow efficiencies, for example) are expected to total an estimated $60 billion for U.S. enterprises. IDC estimates that AI platforms such as IBM Watson, Intel Saffron, Google TensorFlow, and Microsoft Cortana will generate about $1.4 billion in revenue in 2016.
- Tractica forecasts cumulative revenue of $43.5 billion during the ten-year period from 2015 through 2024.
- Market Research Store predicts the market will reach $40 billion in 2022 alone. That report predicts that healthcare and transportation will be AI’s primary industries.
- Reports and Reports believes the advertising and media, finance, and retail sectors will be the drivers (and that the total market size will be just $5.05 billion in 2020).
When you learn about the rapid developments happening in the AI industry, I think you will be as convinced as we are that AI is an unstoppable force. Some form of AI is already touching your life in many ways, whether you know it or not, and whether you like it or not.
Our mission here at AI Trends, and in the other extensions of our business (we’ve launched the AI World Conference & Expo), is straightforward: we will help our readers stay on top of the most important business and technology advances in AI, with a primary emphasis on what you need to do to help your organization learn about, prepare for, and harness everything that AI has to offer.
AI Trends now has 900+ original and curated articles, making it the largest online media platform covering the business and technology of AI. We also highlight for you the investments that continue to be made (over the past few years, according to Venture Scanner, $4.4 billion has been invested in AI across over 980 firms) and track when these start-ups are acquired (there’s a massive battle going on among Google, Facebook, IBM, Amazon, Apple, Tesla, and others for AI talent, with dozens of acquisitions in the past year). We also take note that these same behemoths, who together have already committed billions more in AI research and funding, have realized the need to open up AI knowledge to prevent the potential catastrophe of unintended consequences should only a few companies or countries hoard this advanced AI capability. 2015 was the year we heard warning cries from Elon Musk, Bill Gates, Steve Wozniak, Stephen Hawking, and others on the threat that advanced AI may pose to humanity in the future. So on top of business and technology issues, we also cover ethics and social issues.
Over the next few months, through our partnership with Lux Research and with other leading researchers and journalists, we will also be adding our own key areas of original content and coverage. Check out aitrends.com/research and aitrends.com/webinars for additional original content.
Thank you for taking the time to read this welcome letter. I look forward to sharing all of this with you, and to meeting you at one of our upcoming events.