McKinsey’s State Of Machine Learning & AI, 2017 — from forbes.com by Louis Columbus

Excerpts:

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence: The Next Digital Frontier (80 pp., PDF, free, no opt-in), published last month. McKinsey Global Institute also published an article summarizing the findings, titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. McKinsey Analytics was also involved in the development of the study and discussion paper.

 

Video: 4 FAQs about Watson as tutor — from er.educause.edu by Satya Nitta

Excerpt:

How is IBM using Watson’s intelligent tutoring system? So we are attempting to mimic the best practices of human tutoring. The gold standard will always remain one-on-one, human-to-human tutoring. The whole idea here is an intelligent tutoring system as a computing system that works autonomously with learners, so there is no human intervention. It’s basically acting as the teacher itself, working with the learner. What we’re attempting to do is basically put in place conversational systems, systems that understand human conversation and dialogue, and we’re trying to build a system that, in a very natural way, interacts with people through conversation. The system basically has the ability to ask questions, to answer questions, to know who you are and where you are in your learning journey, what you’re struggling with, and what you’re strong on, and it will personalize its pedagogy to you.

There’s a natural language understanding system and a machine learning system that’s trying to figure out where you are in your learning journey and what the appropriate intervention is for you. The natural language system enables this interaction that’s very rich and conversation-based, where you can basically have a human-like conversation with it and, to a large extent, it will try to understand and to retrieve the right things for you. Again the most important thing is that we will set the expectations appropriately and we have appropriate exit criteria for when the system doesn’t actually understand what you’re trying to do.
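To make the idea of an intelligent tutoring system a little more concrete, here is a minimal sketch of the basic loop such a system runs: estimate mastery per concept, ask about the weakest concept, and update the estimate from the learner’s answer. This is not IBM’s Watson tutor (that code is not public in this form); the questions, the scoring rule, and all names below are invented purely for illustration, and a real system would use natural language understanding rather than exact string matching.

```python
# A toy illustration of an intelligent-tutoring loop: track per-concept mastery,
# pick the weakest concept, ask a question, and update mastery from the answer.
# NOT IBM Watson's tutor; every name and rule here is a simplified assumption.

QUESTIONS = {
    "fractions": [("What is 1/2 + 1/4?", "3/4")],
    "percentages": [("What is 50% of 80?", "40")],
}

def weakest_concept(mastery):
    """Return the concept the learner is currently struggling with most."""
    return min(mastery, key=mastery.get)

def update_mastery(score, correct, rate=0.3):
    """Nudge the mastery estimate toward 1.0 on a correct answer, toward 0.0 otherwise."""
    target = 1.0 if correct else 0.0
    return score + rate * (target - score)

def tutor_session(answers):
    """Run a short scripted session; `answers` stands in for the learner's replies."""
    mastery = {concept: 0.2 for concept in QUESTIONS}   # start with low estimates
    for reply in answers:
        concept = weakest_concept(mastery)
        question, expected = QUESTIONS[concept][0]
        correct = reply.strip() == expected
        mastery[concept] = update_mastery(mastery[concept], correct)
        print(f"[{concept}] {question} -> learner said {reply!r} "
              f"({'correct' if correct else 'not quite'}), mastery now {mastery[concept]:.2f}")
    return mastery

if __name__ == "__main__":
    tutor_session(["3/4", "35", "40"])
```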


Chatbot lawyer, which contested £7.2M in parking tickets, now offers legal help for 1,000+ topics — from arstechnica.co.uk by Sebastian Anthony
DoNotPay has expanded to cover the UK and all 50 US states. Free legal help for everyone!

Excerpt:

In total, DoNotPay now has over 1,000 separate chatbots that generate formal-sounding documents for a range of basic legal issues, such as seeking remuneration for a delayed flight or train, reporting discrimination, or asking for maternity leave. If you divide that by 51 (the 50 US states plus the UK), you get a rough idea of how many different topics are covered. Each bot had to be hand-crafted by the British creator Joshua Browder, with the assistance of part-time and volunteer lawyers, to ensure that the documents are actually fit for purpose.
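As a rough sketch of how a template-driven legal chatbot of this kind can work (a generic illustration, not DoNotPay’s actual code; the questions and the letter text below are invented), the bot asks a fixed set of questions for a given issue and fills the answers into a pre-written letter:

```python
# A generic sketch of a template-driven "document bot": ask scripted questions,
# then fill the answers into a pre-written letter. Not DoNotPay's code; the
# template and field names are invented for illustration only.

from string import Template

DELAYED_FLIGHT_LETTER = Template(
    "Dear $airline,\n\n"
    "I was booked on flight $flight_number on $date, which arrived "
    "$delay_hours hours late. Under the applicable passenger-rights rules, "
    "I am requesting compensation of $amount.\n\n"
    "Sincerely,\n$name"
)

FIELDS = ["name", "airline", "flight_number", "date", "delay_hours", "amount"]

def collect_answers(ask=input):
    """Ask one question per field; `ask` is injectable so the flow can be tested."""
    return {field: ask(f"{field.replace('_', ' ').title()}? ") for field in FIELDS}

def draft_letter(answers):
    return DELAYED_FLIGHT_LETTER.substitute(answers)

if __name__ == "__main__":
    # Canned answers instead of live input, so the sketch runs end to end.
    canned = iter(["A. Passenger", "Example Air", "EX123", "1 July 2017", "4", "250 euros"])
    print(draft_letter(collect_answers(ask=lambda prompt: next(canned))))
```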

 

 

British student’s free robot lawyer can fight speeding tickets and rogue landlords — from telegraph.co.uk by Cara McGoogan

Excerpt:

A free “robot lawyer” that has overturned thousands of parking tickets in the UK can now fight rogue landlords, speeding tickets and harassment at work.

Joshua Browder, the 20-year-old British student who created the aid, has upgraded the robot’s abilities so it can fight legal disputes in 1,000 different areas. These include fighting landlords over security deposits and house repairs, and helping people report fraud to their credit card agency.

To get robot advice, users type their problem into the DoNotPay site and it directs them to a chat bot that can solve their particular legal issue. It can draft letters and offer advice on problems from credit card fraud to airline compensation.

 

 

Free robot lawyer helps low-income people tackle more than 1,000 legal issues — from mashable.com by Katie Dupere

Excerpt:

Shady businesses, you’re on notice. This robot lawyer is coming after you if you play dirty.

Noted legal aid chatbot DoNotPay just announced a massive expansion, which will help users tackle issues in 1,000 legal areas entirely for free. The new features, which launched on Wednesday, cover consumer and workplace rights, and will be available in all 50 states and the UK.

While the bot will still help drivers contest parking tickets and refugees apply for asylum, the service will now also help those who want to report harassment in the workplace or who simply want a refund on a busted toaster.

 

 



From DSC:
Whereas this type of bot is meant for external communications/assistance, we should also watch for Work Bots within an organization, dishing up real-time answers to questions that employees have about a variety of topics. I think that’s the next generation of technical communications, technical/help desk support, and training and development; at least some of the staff in those departments will likely be building these types of bots.
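A minimal sketch of what such a work bot might look like under the hood, assuming nothing more than a hand-maintained FAQ and simple keyword overlap for matching (the entries below are invented, and a production bot would use real natural language understanding and search):

```python
# A toy internal "work bot": match an employee's question against a small FAQ
# by word overlap and return the best answer. FAQ entries are invented;
# a real bot would use NLU/search rather than bag-of-words overlap.

FAQ = {
    "How do I reset my VPN password?":
        "Open the IT self-service portal and choose 'Reset VPN credentials'.",
    "When are expense reports due?":
        "Expense reports are due by the 5th business day of the following month.",
    "How do I book a conference room?":
        "Use the room-booking tab in the calendar client and pick an open slot.",
}

def tokenize(text):
    return set(text.lower().replace("?", "").split())

def answer(question, faq=FAQ):
    """Return the FAQ answer whose question shares the most words with the query."""
    query = tokenize(question)
    best = max(faq, key=lambda q: len(query & tokenize(q)))
    return faq[best] if query & tokenize(best) else "Sorry, I don't know that one yet."

if __name__ == "__main__":
    print(answer("where do I reset the vpn password"))
    print(answer("expense report deadline?"))
```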



 

Addendum on 7/15/17:

LawGeex: Contract Review Automation

Excerpt (emphasis DSC):

The LawGeex Contract Review Automation enables anyone in your business to easily submit and receive approvals on contracts without waiting for the legal team. Our A.I. technology reads, reviews and understands your contracts, approving those that meet your legal team’s pre-defined criteria, and escalating those that don’t. Legal can maintain control and mitigate risk while giving other departments the freedom they need to get business moving.

 

 

Intel seen losing to Nvidia amid ‘tectonic shift’ in technology — from marketwatch.com by Jeremy Owens

Excerpts (emphasis DSC):

Computing is undergoing a massive shift, and the company known for making the brains behind many of the world’s computers and servers has not shifted as fast as competitors.

Jefferies equity analyst Mark Lipacis came to that conclusion Monday, reporting in a note that Intel Corp. stands to take a hit in its data-center business amid a move to a new computing paradigm focused on artificial intelligence and connected devices that he believes represents a “tectonic shift” in technology. Instead, Nvidia Corp. is best-positioned to be the chip leader in the new landscape, Lipacis wrote.

Lipacis’s thesis on the semiconductor industry is that computing paradigms undergo dramatic shifts roughly every 15 years, with mainframe-focused technology giving way to minicomputers and then personal computers, and later to mobile phones and cloud data-center architecture. While Intel was a dominant player in the second and third epochs of the computing era, with its chips finding a home in PCs and data-center servers, Lipacis believes the current shift to parallel processing and the so-called Internet of Things will belong to different chip makers.

“We believe we are at the start of the fourth tectonic shift now, to a parallel processing/IoT model, driven by lower memory costs, free data storage, improvements in parallel processing hardware and software, and improvements in AI technologies like neural networking, that make it easy to monetize all the data that is being stored,” he wrote.

 

 

The case for a next generation learning platform [Grush & Christian]

 

The case for a next generation learning platform — from campustechnology.com by Mary Grush & Daniel Christian

Excerpt (emphasis DSC):

Grush: Then what are some of the implications you could draw from metrics like that one?

Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”

While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.

Whether our jobs are completely displaced or if we will be working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”

 

 

A side note from DSC:
I began working on this vision prior to 2010…but I didn’t officially document it until 2012.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:

A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
  • Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery (see the scheduling sketch after this list). Once mastered, the system will ask the learner whether they still want to receive that particular stream of content or not.
  • A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
  • An AI-backed system for analyzing employment trends and opportunities will highlight those courses and streams of content that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
  • (Potentially) Integration with one-on-one tutoring services
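As a rough sketch of the “periodic quizzes (i.e., spaced repetition) determine mastery” idea above, here is a simplified Leitner-style scheduler. The intervals and the mastery rule are illustrative assumptions, not a specification of the platform: items answered correctly come up for review at increasingly long intervals, a miss sends an item back to frequent review, and an item counts as mastered once it survives the longest interval.

```python
# A simplified Leitner-style spaced-repetition scheduler, sketching how periodic
# quizzes could decide when a topic counts as "mastered". Intervals and the
# mastery rule are illustrative assumptions, not a product specification.

from datetime import date, timedelta

INTERVALS = [timedelta(days=d) for d in (1, 3, 7, 21)]  # review gap for each box

class Item:
    def __init__(self, topic, today):
        self.topic = topic
        self.box = 0                      # box 0 = reviewed most frequently
        self.due = today + INTERVALS[0]

    @property
    def mastered(self):
        return self.box >= len(INTERVALS)

    def review(self, correct, today):
        """Move up a box on a correct answer, back to box 0 on a miss."""
        self.box = self.box + 1 if correct else 0
        if not self.mastered:
            self.due = today + INTERVALS[self.box]

def due_items(items, today):
    return [i for i in items if not i.mastered and i.due <= today]

if __name__ == "__main__":
    today = date(2017, 7, 15)
    item = Item("machine learning basics", today)
    for day, correct in [(1, True), (4, True), (11, False), (12, True)]:
        d = today + timedelta(days=day)
        if item in due_items([item], d):
            item.review(correct, d)
            print(d, "box", item.box, "next due", item.due, "mastered:", item.mastered)
```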

Further details here >>

Addendum from DSC (regarding the resource mentioned below):
Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”



 

Westinghouse’s Alexa-equipped Fire TV Edition smart TVs are now available — from theverge.com by Chaim Gartenberg

 

The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.


Finally…also see:

  • NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota
  • Holographic storytelling — from jwtintelligence.com
    The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
    New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.  Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.


The Internet’s future is more fragile than ever, says one of its inventors — from fastcompany.com by Sean Captain
Vint Cerf, the co-creator of tech that makes the internet work, worries about hacking, fake news, autonomous software, and perishable digital history.

Excerpts:

The term “digital literacy” is often referred to as if you can use a spreadsheet or a text editor. But I think digital literacy is closer to looking both ways before you cross the street. It’s a warning to think about what you’re seeing, what you’re hearing, what you’re doing, and thinking critically about what to accept and reject . . . Because in the absence of this kind of critical thinking, it’s easy to see how the phenomena that we’re just now labeling fake news, alternative facts [can come about]. These [problems] are showing up, and they’re reinforced in social media.

What are the criteria that we should apply to devices that are animated by software, and which we rely upon without intervention? And this is the point where autonomous software becomes a concern, because we turn over functionality to a piece of code. And dramatic examples of that are self-driving cars . . . Basically you’re relying on software doing the right things, and if it doesn’t do the right thing, you have very little to say about it.

I feel like we’re moving into a kind of fragile future right now that we should be much more thoughtful about improving, that is to say making more robust.

 

 

Imagine a house that stops working when the internet connection goes away. That’s not acceptable.


Everyday Life in the Future — from hpmegatrends.com by Andrew Bolwell

 

Technology will play an increasingly vital role in our lives as we move into the future. Four major Megatrends — Rapid Urbanization, Changing Demographics, Hyper Globalization, and Accelerated Innovation — will have a sustained and transformative impact on businesses, societies, economies, cultures, and our personal lives.


From DSC:
With the ever-increasing use of artificial intelligence, algorithms, robotics, and automation, people are going to need to reinvent themselves quickly, cost-effectively, and conveniently. As such, we had better begin working immediately on a next generation learning platform, before the other tidal waves start hitting the beach. “What other tidal waves are you talking about?” one might ask.

Well….here’s one for you:


 

 

New Report Predicts Over 100,000 Legal Jobs Will Be Lost To Automation — from futurism.com by Jelor Gallego
An extensive new analysis by Deloitte estimates that over 100,000 jobs will be lost to technological automation within the next two decades. Increasing technological advances have helped replace menial office roles and handle repetitive tasks.

 


From DSC:
I realize that not all of this is doom and gloom. There will be jobs lost and there will be jobs gained. That point is also made by MIT futurists Andrew McAfee and Erik Brynjolfsson in a recent podcast entitled “Want to stay relevant? Then listen up,” in which they explain the momentous technological changes coming next and what you can do to harness them.

But the point is that massive reinvention is going to be necessary. Traditional institutions of higher education — as well as the current methods of accreditation — are woefully inadequate to address the new, exponential pace of change.


Here’s my take on what it’s going to take to deliver constantly up-to-date streams of relevant content at an incredibly affordable price.


Winner takes all — by Michael Moe, Luben Pampoulov, Li Jiang, Nick Franco, & Suzee Han

 

We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android.

— Larry Page, CEO, Alphabet

 

 

Excerpt:

An alphabet is a collection of letters that represent language. Alphabet, accordingly, is a collection of companies that represent the many bets Larry Page is making to ensure his platform is built to not only survive, but to thrive in a future defined by accelerating digital disruption. It’s an “Alpha” bet on a diversified platform of assets.

If you look closely, the world’s top technology companies are making similar bets.


Technology in general, and the Internet in particular, is all about disproportionate gains to the leader in a category. Accordingly, as technology leaders like Facebook, Alphabet, and Amazon survey the competitive landscape, they have increasingly aimed to develop and acquire emerging technology capabilities across a broad range of complementary categories.


Introducing Deep Learning and Neural Networks — Deep Learning for Rookies — from medium.com by Nahua Kang

Excerpts:

Here’s a short list of general tasks that deep learning can perform in real situations:

  1. Identify faces (or more generally image categorization)
  2. Read handwritten digits and texts
  3. Recognize speech (no more transcribing interviews yourself)
  4. Translate languages
  5. Play computer games
  6. Control self-driving cars (and other types of robots)

And there’s more. Just pause for a second and imagine all the things that deep learning could achieve. It’s amazing and perhaps a bit scary!

Currently there are already many great courses, tutorials, and books on the internet covering this topic, such as (not exhaustive or in specific order):

  1. Michael Nielsen’s Neural Networks and Deep Learning
  2. Geoffrey Hinton’s Neural Networks for Machine Learning
  3. Goodfellow, Bengio, & Courville’s Deep Learning
  4. Ian Trask’s Grokking Deep Learning,
  5. Francois Chollet’s Deep Learning with Python
  6. Udacity’s Deep Learning Nanodegree (not free but high quality)
  7. Udemy’s Deep Learning A-Z ($10–$15)
  8. Stanford’s CS231n and CS224n
  9. Siraj Raval’s YouTube channel

The list goes on and on. David Venturi has a post for freeCodeCamp that lists many more resources. Check it out here.
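For readers who want a feel for what these courses and tutorials start with, here is a minimal two-layer neural network trained with plain gradient descent on the classic XOR problem, using nothing but NumPy. It is a toy example of the general technique, not code taken from any of the resources listed above.

```python
# A minimal two-layer neural network (the "hello world" of deep learning):
# learn XOR with sigmoid units and plain batch gradient descent, using only NumPy.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))           # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))           # output layer

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (derivatives of squared error through the sigmoids)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should be close to [[0], [1], [1], [0]]
```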

 
When AI can transcribe everything — from theatlantic.com by Greg Noone
Tech companies are rapidly developing tools to save people from the drudgery of typing out conversations—and the impact could be profound.

Excerpt:

Despite the recent emergence of browser-based transcription aids, transcription’s an area of drudgery in the modern Western economy where machines can’t quite squeeze human beings out of the equation. That is until last year, when Microsoft built one that could.

Automatic speech recognition, or ASR, is an area that has gripped the firm’s chief speech scientist, Xuedong Huang, since he entered a doctoral program at Scotland’s Edinburgh University. “I’d just left China,” he says, remembering the difficulty he had in using his undergraduate knowledge of American English to parse the Scottish brogue of his lecturers. “I wished every lecturer and every professor, when they talked in the classroom, could have subtitles.”

“That’s the thing with transcription technology in general,” says Prenger. “Once the accuracy gets above a certain bar, everyone will probably start doing their transcriptions that way, at least for the first several rounds.” He predicts that, ultimately, automated transcription tools will increase both the supply of and the demand for transcripts. “There could be a virtuous circle where more people expect more of their audio that they produce to be transcribed, because it’s now cheaper and easier to get things transcribed quickly. And so, it becomes the standard to transcribe everything.”
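None of the systems described in the article are available as a simple library call, but the general shape of an automated transcription workflow is easy to sketch. The example below uses the open-source SpeechRecognition package and Google’s free Web Speech API purely as stand-ins for the commercial ASR engines discussed above; the audio file name is hypothetical.

```python
# A generic sketch of automated transcription with the open-source
# SpeechRecognition package -- a stand-in for the commercial ASR systems
# discussed in the article. Requires: pip install SpeechRecognition
# The audio file name below is hypothetical.

import speech_recognition as sr

def transcribe(path):
    recognizer = sr.Recognizer()
    with sr.AudioFile(path) as source:          # WAV/AIFF/FLAC file
        audio = recognizer.record(source)       # read the whole file into memory
    try:
        return recognizer.recognize_google(audio)   # free Google Web Speech API
    except sr.UnknownValueError:
        return "[unintelligible]"
    except sr.RequestError as err:
        return f"[transcription service unavailable: {err}]"

if __name__ == "__main__":
    print(transcribe("interview_excerpt.wav"))
```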

 
What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system for analyzing employment trends and opportunities will highlight those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them (see the sketch after this list)
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
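To make the built-in RSS aggregator idea a bit more concrete, here is a small sketch using the feedparser library. The feed URLs are placeholders, and a real platform would filter and rank these streams per learner and topic.

```python
# A small sketch of the "streams of content" idea: pull a few RSS/Atom feeds
# with feedparser and list the newest items. Feed URLs are placeholders;
# a real learning platform would filter/rank these per learner and topic.
# Requires: pip install feedparser

import feedparser

FEEDS = [
    "https://example.com/machine-learning.rss",        # placeholder topic feeds
    "https://example.com/instructional-design.rss",
]

def latest_items(feed_urls, per_feed=3):
    items = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        source = parsed.feed.get("title", url)
        for entry in parsed.entries[:per_feed]:
            items.append((source, entry.get("title", "untitled"), entry.get("link", "")))
    return items

if __name__ == "__main__":
    for source, title, link in latest_items(FEEDS):
        print(f"{source}: {title}\n  {link}")
```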

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

Connecting more Americans with jobs — from blog.google by Nick Zakrasek

Excerpt:

We have a long history of using our technology to connect people with crucial information. At I/O, we announced Google for Jobs, a company-wide initiative focused on helping both job seekers and employers, through deep collaboration with the job matching industry. This effort includes the Cloud Jobs API, announced last year, which provides access to Google’s machine learning capabilities to power smarter job search and recommendations within career sites, jobs boards, and other job matching sites and apps. Today, we’re taking the next step in the Google for Jobs initiative by putting the convenience and power of Search into the hands of job seekers. With this new experience, we aim to connect Americans to job opportunities across the U.S., so no matter who you are or what kind of job you’re looking for, you can find job postings that match your needs.

 

 

How to Use Google for Jobs to Rock Your Career — from avidcareerist.com by Donna Svei

Excerpt:

How Does Google for Jobs Work?
Let me walk you through an example.

Go to your Google search bar.
Enter your preferred job title, followed by the word jobs, and your preferred location. Like this:

 

 

Google launches its AI-powered jobs search engine — from techcrunch.com by Frederic Lardinois

Excerpt:

Looking for a new job is getting easier. Google today launched a new jobs search feature right on its search result pages that lets you search for jobs across virtually all of the major online job boards, like LinkedIn, Monster, WayUp, DirectEmployers, CareerBuilder, Facebook, and others. Google will also include job listings it finds on a company’s homepage.

The idea here is to give job seekers an easy way to see which jobs are available without having to go to multiple sites only to find duplicate postings and lots of irrelevant jobs.

 

 

Google for Jobs Could Save You Time on Your Next Job Search — from lifehacker.com by Patrick Allan

Excerpt:

Google launched its new Google for Jobs feature today, which uses their machine learning Cloud API to put job listings from all the major job service sites in one easy-to-search place.

 
An Artificial Intelligence Developed Its Own Non-Human Language — from theatlantic.com by Adrienne LaFrance
When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.

Excerpt:

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language. If this doesn’t fill you with a sense of wonder and awe about the future of machines and humanity then, I don’t know, go watch Blade Runner or something.

 