The Future Today Institute’s 12th Annual Emerging Tech Trends Report — from futuretodayinstitute.com

Excerpts:

At the Future Today Institute, we identify emerging tech trends and map the future for our clients. This is FTI’s 12th annual Tech Trends Report, and in it we identify 315 tantalizing advancements in emerging technologies — artificial intelligence, biotech, autonomous robots, green energy and space travel — that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. As of the publication date, the annual FTI Tech Trends Report has garnered more than 7.5 million cumulative views.

Key findings for 2019 (emphasis DSC)

  • Privacy is dead. (DC: NOT GOOD!!! If this is true, can the situation be reversed?)
  • Voice Search Optimization (VSO) is the new SEO.
  • The Big Nine.
  • Personal data records are coming. (DC: Including cloud-based learner profiles I hope.)
  • China continues to ascend, and not just in artificial intelligence.
  • Lawmakers around the world are not prepared to deal with new challenges that arise from emerging science and technology.
  • Consolidation continues as a key theme for 2019.

 

 

Law schools escalate their focus on digital skills — from edtechmagazine.com by Eli Zimmerman
Coding, data analytics and device integration give students the tools to become more efficient lawyers.

Excerpt:

Participants learned to use analytics programs and artificial intelligence to complete work in a fraction of the time it usually takes.

For example, students analyzed contracts using AI programs to find errors and areas for improvement across various legal jurisdictions. In another exercise, students learned to use data programs to draft nondisclosure agreements in less than half an hour.

By learning analytics models, students will graduate with the skills to make them more effective — and more employable — professionals.

“As advancing technology and massive data sets enable lawyers to answer complex legal questions with greater speed and efficiency, courses like Legal Analytics will help KU Law students be better advocates for tomorrow’s clients and more competitive for tomorrow’s jobs,” Stephen Mazza, dean of the University of Kansas School of Law, tells Legaltech News.
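To make the "contract analysis" idea above a bit more concrete, here is a minimal sketch of one building block of such tools: automatically flagging required clauses that are missing from an NDA draft. The clause list and the draft text are invented for illustration; real legal-analytics products rely on far more sophisticated AI models than this keyword check.

```python
# Toy sketch of automated contract review: flag NDA sections that are
# missing from a draft. Clause names and rules here are illustrative,
# not drawn from any real legal-analytics product.
REQUIRED_CLAUSES = [
    "definition of confidential information",
    "term",
    "governing law",
    "remedies",
]

def review_nda(text):
    """Return the list of required clauses not found in the draft."""
    lowered = text.lower()
    return [clause for clause in REQUIRED_CLAUSES if clause not in lowered]

draft = """
1. Definition of Confidential Information. ...
2. Term. This Agreement lasts two (2) years. ...
3. Governing Law. The laws of the State of Kansas apply. ...
"""
missing = review_nda(draft)
print(missing)  # ['remedies'] -- the draft has no remedies clause
```

A student exercise like the KU Law one would then go further: using an AI model rather than keywords, and comparing clause language across jurisdictions.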

 

Reflecting that shift, the Law School Admission Council, which organizes and distributes the Law School Admission Test, will be offering the test exclusively on Microsoft Surface Go tablets starting in July 2019.

 

From DSC:
I appreciate the article, thanks Eli. From one of the articles that was linked to, it appears that, “To facilitate the transition to the Digital LSAT starting July 2019, LSAC is procuring thousands of Surface Go tablets that will be loaded with custom software and locked down to ensure the integrity of the exam process and security of the test results.”

 

 

 

 

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

 

 

 

Microsoft built a chat bot to match patients to clinical trials — from fortune.com by Dina Bass

Excerpt:

A chat bot that began as a hackathon project at Microsoft’s lab in Israel makes it easier for sick patients to find clinical trials that could provide otherwise unavailable medicines and therapies.

The Clinical Trials Bot lets patients and doctors search for studies related to a disease and then answer a succession of text questions. The bot then suggests links to trials that best match the patients’ needs. Drugmakers can also use it to find test subjects.

 

Half of all clinical trials for new drugs and therapies never reach the number of patients needed to start, and many others are delayed for the same reason, Bitran said. Meanwhile patients, sometimes desperately sick, find it hard to comb through the roughly 50,000 trials worldwide and their arcane and lengthy criteria—typically 20 to 30 factors. Even doctors struggle to search quickly on behalf of patients, Bitran said.
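The matching problem Bitran describes can be sketched in miniature: each trial carries a set of eligibility criteria, and the bot narrows the candidate list as a patient answers questions. The trial IDs and criteria below are invented examples, not drawn from Microsoft's actual bot, which handles dozens of factors per trial.

```python
# Minimal sketch of the matching idea behind a clinical-trials chat bot:
# compare a patient's answers against each trial's eligibility criteria.
TRIALS = {
    "NCT-0001": {"diagnosis": "melanoma", "min_age": 18, "prior_chemo": False},
    "NCT-0002": {"diagnosis": "melanoma", "min_age": 40, "prior_chemo": True},
    "NCT-0003": {"diagnosis": "lymphoma", "min_age": 18, "prior_chemo": False},
}

def match_trials(patient):
    """Return IDs of trials whose criteria the patient satisfies."""
    matches = []
    for trial_id, criteria in TRIALS.items():
        if (patient["diagnosis"] == criteria["diagnosis"]
                and patient["age"] >= criteria["min_age"]
                and patient["prior_chemo"] == criteria["prior_chemo"]):
            matches.append(trial_id)
    return matches

patient = {"diagnosis": "melanoma", "age": 52, "prior_chemo": False}
print(match_trials(patient))  # ['NCT-0001']
```

The hard part in practice is not this filtering step but extracting structured criteria from the arcane, lengthy text of 50,000 trial descriptions, which is where the machine learning comes in.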

 

 

 

Joint CS and Philosophy Initiative, Embedded EthiCS, Triples in Size to 12 Courses — from thecrimson.com by Ruth Hailu and Amy Jia

Excerpt:

The idea behind the Embedded EthiCS initiative arose three years ago after students in Grosz’s course, CS 108: “Intelligent Systems: Design and Ethical Challenges,” pushed for an increased emphasis on ethical reasoning within discussions surrounding technology, according to Grosz and Simmons. One student suggested Grosz reach out to Simmons, who also recognized the importance of an interdisciplinary approach to computer science.

“Not only are today’s students going to be designing technology in the future, but some of them are going to go into government and be working on regulation,” Simmons said. “They need to understand how [ethical issues] crop up, and they need to be able to identify them.”

 

 

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions they studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
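Crawford's point can be illustrated with a toy example: a forecaster trained only on historical arrest counts will keep directing patrols to whichever district was most heavily policed in the past, regardless of the true crime rate. That is exactly the feedback loop the paper describes. The data below are fabricated for the illustration.

```python
from collections import Counter

# Toy illustration of the "dirty data" feedback loop: if historical arrest
# records over-represent one district, a frequency-based forecaster will
# keep directing patrols there, reinforcing the original skew.
historical_arrests = (
    ["district_a"] * 80 +   # heavily policed in the past
    ["district_b"] * 20     # under-recorded, not necessarily safer
)

def predict_hotspot(arrest_log):
    """Naive forecaster: patrol wherever most past arrests occurred."""
    return Counter(arrest_log).most_common(1)[0][0]

print(predict_hotspot(historical_arrests))  # district_a, echoing the old bias
```

Nothing in the model distinguishes "more crime" from "more enforcement"; cleaning or amending the input data is the only way to break the loop.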

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens in the future when some of these emerging technologies are combined…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impacts the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 

 

 

India Just Swore in Its First Robot Police Officer — from futurism.com by Dan Robitzski
RoboCop, meet KP-Bot.

Excerpt:

RoboCop
India just swore in its first robotic police officer, which is named KP-Bot.

The animatronic-looking machine was granted the rank of sub-inspector on Tuesday, and it will operate the front desk of Thiruvananthapuram police headquarters, according to India Today.

 

 

From DSC:
Whoa….hmmm…note to the ABA and to the legal education field — and actually to anyone involved in developing laws — we need to catch up. Quickly.

My thoughts go to the governments and to the militaries around the globe. Are we now on a slippery slope? How far along are the militaries of the world in integrating robotics and AI into their weapons of war? Quite far, I think.

Also, at the higher education level, are Computer Science and Engineering Departments taking their responsibilities seriously in this regard? What kind of teaching is being done (or not done) in terms of the moral responsibilities of their code? Their robots?

 

 

 

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are engrained within the Silicon Valley tech and innovation scene.

 

 

AR will spark the next big tech platform — call it Mirrorworld — from wired.com by Kevin Kelly

Excerpt:

It is already under construction. Deep in the research labs of tech companies around the world, scientists and engineers are racing to construct virtual places that overlay actual places. Crucially, these emerging digital landscapes will feel real; they’ll exhibit what landscape architects call placeness. The Street View images in Google Maps are just facades, flat images hinged together. But in the mirrorworld, a virtual building will have volume, a virtual chair will exhibit chairness, and a virtual street will have layers of textures, gaps, and intrusions that all convey a sense of “street.”

The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.

 

Also see:
Google Maps in augmented reality points you in the right direction — from mashable.com by Sasha Lekach

 

 

Bobst launches augmented reality helpline — from proprint.com.au by Sheree Young

Excerpt:

Swiss packaging and label equipment supplier Bobst has launched a new augmented reality smart headset to help answer customer questions.

Rapid problem solving thanks to a new augmented reality helpline service introduced by Swiss packaging and label equipment supplier Bobst stands to save printers time and money, the company says.

The Helpline Plus AR innovation provides a remote assistance service to Bobst’s customers using a smart headset with augmented reality glasses. The technology is being gradually rolled out globally, Bobst says.

Customers can use the headset to contact technical experts and iron out any issues they may be having as well as receive real time advice and support.

 

 

 

LinkedIn 2019 Talent Trends: Soft Skills, Transparency and Trust — from linkedin.com by Josh Bersin

Excerpts:

This week LinkedIn released its 2019 Global Talent Trends research, a study that summarizes job and hiring data across millions of people, and the results are quite interesting. (5,165 talent professionals and managers responded, a sizable sample.)

In an era when automation, AI, and technology have become more pervasive, important (and frightening) than ever, the big issue companies face is about people: how we find and develop soft skills, how we create fairness and transparency, and how we make the workplace more flexible, humane, and honest.

The most interesting part of this research is a simple fact: in today’s world of software engineering and ever-more technology, it’s soft skills that employers want. 91% of companies cited this as an issue and 80% of companies are struggling to find better soft skills in the market.

What is a “soft skill?” The term goes back twenty years, to when we had “hard skills” (engineering and science) and threw everything else into the category of “soft.” In reality soft skills are all the human skills we have in teamwork, leadership, collaboration, communication, creativity, and person-to-person service. It’s easy to “teach” hard skills, but soft skills must be “learned.”

 

 

Also see:

Employers Want ‘Uniquely Human Skills’ — from campustechnology.com by Dian Schaffhauser

Excerpt:

According to 502 hiring managers and 150 HR decision-makers, the top skills they’re hunting for among new hires are:

  • The ability to listen (74 percent);
  • Attention to detail and attentiveness (70 percent);
  • Effective communication (69 percent);
  • Critical thinking (67 percent);
  • Strong interpersonal abilities (65 percent); and
  • Being able to keep learning (65 percent).
 

 

UIX: When the future comes to West Michigan, will we be ready? — from rapidgrowthmedia.com by Matthew Russell

Excerpts (emphasis DSC):

“Here in the United States, if we were to personify things a bit, it’s almost like society is anxiously calling out to an older sibling (i.e., emerging technologies), ‘Heh! Wait up!!!'” Christian says. “This trend has numerous ramifications.”

Out of those ramifications, Christian names three main points that society will have to address to fully understand, make use of, and make practical, future technologies.

  1. The need for the legal/legislative side of the world to close the gap between what’s possible and what’s legal
  2. The need for lifelong learning and to reinvent oneself
  3. The need to make pulse-checking/futurism an essential tool in the toolbox of every member of the workforce today and in the future

 


 

From DSC:
The key thing that I was trying to relay in my contribution towards Matthew’s helpful article was that we are now on an exponential trajectory of technological change. This trend has ramifications for numerous societies around the globe, and it involves the legal realm as well. Hopefully, all of us in the workforce are coming to realize our need to be constantly pulse-checking the relevant landscapes around us. To help make that happen, each of us needs to be tapping into the appropriate “streams of content” that are relevant to our careers so that our knowledge bases are as up-to-date as possible. We’re all into lifelong learning now, right?

Along these lines, there is an increasing need for futurism to hit the mainstream. That is, when the world is moving at 120+ mph, the skills and methods that futurists follow must be better taught and understood, or many people will be broadsided by the changes brought about by emerging technologies. We need to better pulse-check the relevant landscapes, anticipate the oncoming changes, develop potential scenarios, and then design the strategies to respond to those potential scenarios.

 

 

Online curricula helps teachers tackle AI in the classroom — from educationdive.com by Lauren Barack

Dive Brief:

  • Schools may already use some form of artificial intelligence (AI), but hardly any have curricula designed to teach K-12 students how it works and how to use it, wrote EdSurge. However, organizations such as the International Society for Technology in Education (ISTE) are developing their own sets of lessons that teachers can take to their classrooms.
  • Members of “AI for K-12” — an initiative co-sponsored by the Association for the Advancement of Artificial Intelligence and the Computer Science Teachers Association — wrote in a paper that an AI curriculum should address five basic ideas:
    • Computers use sensors to understand what goes on around them.
    • Computers can learn from data.
    • With this data, computers can create models for reasoning.
    • While computers are smart, it’s hard for them to understand people’s emotions, intentions and natural languages, making interactions less comfortable.
    • AI can be a beneficial tool, but it can also harm society.
  • These kinds of lessons are already at play among groups including the Boys and Girls Club of Western Pennsylvania, which has been using a program from online AI curriculum site ReadyAI. The education company lent its AI-in-a-Box kit, which normally sells for $3,000, to the group so it could teach these concepts.
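The second big idea on that list, that computers can learn from data, can be demonstrated at classroom scale in a few lines of code: a nearest-centroid classifier derives its own rule from labeled examples rather than being handed one. All numbers below are made up for the demonstration.

```python
# Classroom-scale illustration of "computers can learn from data":
# a nearest-centroid classifier learns from labeled examples instead
# of being programmed with an explicit rule.
def train_centroids(examples):
    """examples: list of (value, label) pairs. Returns mean value per label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(value, centroids):
    """Assign the label whose learned centroid is closest to the value."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))

# Learn to tell "small" from "large" purely from data.
data = [(1, "small"), (2, "small"), (9, "large"), (10, "large")]
centroids = train_centroids(data)  # {'small': 1.5, 'large': 9.5}
print(classify(3, centroids))      # 'small'
```

Changing the training data changes the learned model, which also makes the fifth idea tangible: a system trained on skewed examples inherits that skew.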

 

AI curriculum is coming for K-12 at last. What will it include? — from edsurge.com by Danielle Dreilinger

Excerpt:

Artificial intelligence powers Amazon’s recommendations engine, Google Translate and Siri, for example. But few U.S. elementary and secondary schools teach the subject, maybe because there are so few curricula available for students. Members of the “AI for K-12” work group wrote in a recent Association for the Advancement of Artificial Intelligence white paper that “unlike the general subject of computing, when it comes to AI, there is little guidance for teaching at the K-12 level.”

But that’s starting to change. Among other advances, ISTE and AI4All are developing separate curricula with support from General Motors and Google, respectively, according to the white paper. Lead author Dave Touretzky of Carnegie Mellon has developed his own curriculum, Calypso. It’s part of the “AI-in-a-Box” kit, which is being used by more than a dozen community groups and school systems, including Carter’s class.

 

 

 

 

Amazon has 10,000 employees dedicated to Alexa — here are some of the areas they’re working on — from businessinsider.com by Avery Hartmans

Summary (emphasis DSC):

  • Amazon’s vice president of Alexa, Steve Rabuchin, has confirmed that yes, there really are 10,000 Amazon employees working on Alexa and the Echo.
  • Those employees are focused on things like machine learning and making Alexa more knowledgeable.
  • Some employees are working on giving Alexa a personality, too.

 

 

From DSC:
How might this trend impact learning spaces? For example, I am interested in using voice to intuitively “drive” smart classroom control systems:

  • “Alexa, turn on the projector”
  • “Alexa, dim the lights by 50%”
  • “Alexa, open Canvas and launch my Constitutional Law I class”
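Here is a rough sketch of how a skill backend might route such utterances. The intent names and device-control functions are hypothetical stand-ins; a real deployment would use the Alexa Skills Kit together with the classroom AV system's actual control API.

```python
# Sketch of the skill-backend side of voice-driven classroom control.
# Intent names and the device-control stubs below are hypothetical.
def set_projector(on):
    return "Projector on." if on else "Projector off."

def dim_lights(percent):
    return f"Dimming lights by {percent} percent."

def handle_intent(intent_name, slots):
    """Route a recognized intent (plus its slot values) to an action."""
    if intent_name == "TurnOnProjectorIntent":
        return set_projector(on=True)
    if intent_name == "DimLightsIntent":
        return dim_lights(int(slots.get("percent", 50)))
    return "Sorry, I can't do that yet."

# "Alexa, dim the lights by 50%"
print(handle_intent("DimLightsIntent", {"percent": "50"}))
```

The speech recognition and intent/slot extraction would be handled by the voice platform itself; the interesting integration work for a smart classroom is in the handlers that talk to the projector, lighting, and LMS.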

 

 

 

Gartner survey shows 37% of organizations have implemented AI in some form — from gartner.com
Despite talent shortages, the percentage of enterprises employing AI grew 270% over the past four years

Excerpt:

The number of enterprises implementing artificial intelligence (AI) grew 270 percent in the past four years and tripled in the past year, according to the Gartner, Inc. 2019 CIO Survey. Results showed that organizations across all industries use AI in a variety of applications, but struggle with acute talent shortages.

 

The deployment of AI has tripled in the past year — rising from 25 percent in 2018 to 37 percent today. The reason for this big jump is that AI capabilities have matured significantly and thus enterprises are more willing to implement the technology. “We still remain far from general AI that can wholly take over complex tasks, but we have now entered the realm of AI-augmented work and decision science — what we call ‘augmented intelligence,’” Mr. Howard added.

 

Key Findings from the “2019 CIO Survey: CIOs Have Awoken to the Importance of AI”

  • The percentage of enterprises deploying artificial intelligence (AI) has tripled in the past year.
  • CIOs picked AI as the top game-changer technology.
  • Enterprises use AI in a wide variety of applications.
  • AI suffers from acute talent shortages.

 

 

From DSC:
In this posting, I discussed an idea for a new TV show — a program that would be both entertaining and educational. So I suppose that this posting is a Part II along those same lines. 

The program that came to my mind at that time was one that would focus on significant topics and issues within American society — offered up in a debate/presentation-style format.

I had envisioned that you could have different individuals, groups, or organizations discuss the pros and cons of an issue or topic. The show would provide contact information for helpful resources, groups, organizations, legislators, etc. These contacts would be for learning more about a subject or getting involved with finding a solution for that problem.

OR

…as I revisit that idea today…perhaps the show could feature humans versus an artificial intelligence such as IBM’s Project Debater:

 

 

Project Debater is the first AI system that can debate humans on complex topics. Project Debater digests massive texts, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent. Eventually, Project Debater will help people reason by providing compelling, evidence-based arguments and limiting the influence of emotion, bias, or ambiguity.

 

 

 


© 2019 | Daniel Christian