Instructure: Plans to expand beyond Canvas LMS into machine learning and AI — from mfeldstein.com by Phil Hill

Excerpts:

On the same day as Instructure’s earnings call and release of FY2018 financial results, the company announced the acquisition of Portfolium for $43 million, a small startup focusing on “ePortfolio network, student-centered assessment, job matching capabilities, and academic and co-curricular pathways”.

Instructure now views itself as a company with a suite of products, and they are much more open to using corporate M&A to build this portfolio.

We already have analytical capabilities in our Canvas platform. I want to be really clear and delineate the difference between an analytics and reporting capability, and a machine learning and AI platform.

We have the most comprehensive database on the educational experience in the globe. So given that information that we have, no one else has those data assets at their fingertips to be able to develop those algorithms and predictive models.

What’s even more interesting and compelling is that we can take that information, correlate it across all sorts of universities, curricula, etc, and we can start making recommendations and suggestions to the student or instructor in how they can be more successful. Watch this video, read this passage, do problems 17-34 in this textbook, spend an extra two hours on this or that. When we drive student success, we impact things like retention, we impact the productivity of the teachers, and it’s a huge opportunity. That’s just one small example. Our DIG initiative, it is first and foremost a platform for ML and AI, and we will deliver and monetize it by offering different functional domains of predictive algorithms and insights. Maybe things like student success, retention, coaching and advising, career pathing, as well as a number of the other metrics that will help improve the value of an institution or connectivity across institutions.

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
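
As a toy illustration of Crawford's point (synthetic, made-up numbers only; nothing below comes from the report), here is a minimal Python sketch in which one district's recorded arrest rate is inflated by over-policing. A model trained on those records simply hands the inflation back as "risk":

```python
# Toy, synthetic example of "your system is only as good as the data you
# train it on." Both districts are assumed to have the same true incident
# rate, but District A is over-policed, so its *recorded* arrest rate is
# inflated. A model fit to those records reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

district_a = rng.integers(0, 2, size=n)                 # 1 = District A, 0 = District B
recorded_rate = np.where(district_a == 1, 0.30, 0.10)   # same assumed true rate, biased records
arrested = rng.random(n) < recorded_rate                 # "dirty" historical labels

model = LogisticRegression().fit(district_a.reshape(-1, 1), arrested)

print("Predicted 'risk' in District A:", round(model.predict_proba([[1]])[0, 1], 2))  # ~0.30
print("Predicted 'risk' in District B:", round(model.predict_proba([[0]])[0, 1], 2))  # ~0.10
```

The model rates District A as roughly three times riskier even though, by construction, the underlying behavior in the two districts was identical; the bias in the historical records passes straight through to the predictions.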

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticè
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following question:

  • As citizens, what do we do if we don’t like the direction being taken with a given technology or set of technologies? Or with a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it’s the combination of some of these emerging technologies that will be really interesting to watch in the future…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 

Also relevant/see:

Getting smart about the future of AI — from technologyreview.com by MIT Technology Review Insights
Artificial intelligence is a primary driver of possibilities and promise as the Fourth Industrial Revolution unfolds.

Excerpts:

The Industrial Revolution conjures up images of steam engines, textile mills, and iron workers. This was a defining period during the late 18th and early 19th centuries, as society shifted from primarily agrarian to factory-based work. A second phase of rapid industrialization occurred just before World War I, driven by growth in steel and oil production, and the emergence of electricity.

Fast-forward to the 1980s, when digital electronics started having a deep impact on society—the dawning Digital Revolution. Building on that era is what’s called the Fourth Industrial Revolution. Like its predecessors, it is centered on technological advancements—this time it’s artificial intelligence (AI), autonomous machines, and the internet of things—but now the focus is on how technology will affect society and humanity’s ability to communicate and remain connected.

 

That’s what AI technologies represent in the current period of technological change. It is now critical to carefully consider the future of AI, what it will look like, the effect it will have on human life, and what challenges and opportunities will arise as it evolves.

See the full report here >>

Also see:

  • Where Next for AI In Business? An overview for C-level executives — from techrevolution.asia by Bernard Marr
    Excerpt:
    The AI revolution is now well underway. In finance, marketing, medicine and manufacturing, machines are learning to monitor and adapt to real-world inputs in order to operate more efficiently, without human intervention. In our everyday lives, AI kicks in whenever we search the internet, shop online or settle down on the sofa to watch Netflix or listen to Spotify. At this point, it’s safe to say that AI is no longer the preserve of science fiction, but has already changed our world in a huge number of different ways. So: what next? Well, the revolution is showing no signs of slowing down. Research indicates that businesses, encouraged by the initial results they have seen, are now planning on stepping up investment and deployment of AI. One of the most noticeable advances will be the ongoing “democratization” of AI. What this means, put simply, is that AI-enabled business tools will increasingly become available to all of us, no matter what jobs we do.

 

You’ll no longer need to be an expert in computer science to use AI to do your job efficiently – this is the “democratization” of AI and it’s a trend which will impact more and more businesses going forward.

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are engrained within the Silicon Valley tech and innovation scene.

AI bias: 9 questions leaders should ask — from enterprisersproject.com by Kevin Casey
Artificial intelligence bias can create problems ranging from bad business decisions to injustice. Use these questions to fight off potential biases in your AI systems.

Excerpt:

People questions to ask about AI bias
1. Who is building the algorithms?
2. Do your AI & ML teams take responsibility for how their work will be used?
3. Who should lead an organization’s effort to identify bias in its AI systems?
4. How is my training data constructed?

Data questions to ask about AI bias
5. Is the data set comprehensive?
6. Do you have multiple sources of data?

Management questions to ask about AI bias
7. What proportion of resources is appropriate for an organization to devote to assessing potential bias?
8. Have you thought deeply about what metrics you use to evaluate your work?
9. How can we test for bias in training data?
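
To make question 9 a bit more concrete, here is a minimal sketch of one very basic check on a labeled training set, assuming a pandas DataFrame; the column names ("group", "label") and the 80% rule-of-thumb threshold are illustrative assumptions, not anything prescribed by the article:

```python
import pandas as pd

def bias_snapshot(df: pd.DataFrame, group_col: str = "group",
                  label_col: str = "label") -> pd.DataFrame:
    """Per-group count, share of the data set, and positive-label rate."""
    summary = df.groupby(group_col)[label_col].agg(count="size", positive_rate="mean")
    summary["share_of_data"] = summary["count"] / len(df)
    # Ratio of each group's positive rate to the best-off group's rate;
    # values below ~0.8 are a common rough red flag for disparate impact.
    summary["impact_ratio"] = summary["positive_rate"] / summary["positive_rate"].max()
    return summary

# Made-up example: group B is under-represented and has a much lower positive rate.
train = pd.DataFrame({
    "group": ["A"] * 700 + ["B"] * 300,
    "label": [1] * 420 + [0] * 280 + [1] * 90 + [0] * 210,
})
print(bias_snapshot(train))
```

In this made-up example, group B's positive-outcome rate is only half of group A's, which is exactly the kind of imbalance questions 5, 6, and 9 are meant to surface before a model is ever trained on the data.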

13 industries soon to be revolutionized by artificial intelligence — from forbes.com by the Forbes Technology Council

Excerpt:

Artificial intelligence (AI) and machine learning (ML) have a rapidly growing presence in today’s world, with applications ranging from heavy industry to education. From streamlining operations to informing better decision making, it has become clear that this technology has the potential to truly revolutionize how the everyday world works.

While AI and ML can be applied to nearly every sector, once the technology advances enough, there are many fields that are either reaping the benefits of AI right now or that soon will be. According to a panel of Forbes Technology Council members, here are 13 industries that will soon be revolutionized by AI.

Emerging technology trends can seem both elusive and ephemeral, but some become integral to business and IT strategies—and form the backbone of tomorrow’s technology innovation. The eight chapters of Tech Trends 2019 look to guide CIOs through today’s most promising trends, with an eye toward innovation and growth and a spotlight on emerging trends that may well offer new avenues for pursuing strategic ambitions.

UIX: When the future comes to West Michigan, will we be ready? — from rapidgrowthmedia.com by Matthew Russell

Excerpts (emphasis DSC):

“Here in the United States, if we were to personify things a bit, it’s almost like society is anxiously calling out to an older sibling (i.e., emerging technologies), ‘Hey! Wait up!!!'” Christian says. “This trend has numerous ramifications.”

Out of those ramifications, Christian names three main points that society will have to address to fully understand, make use of, and make practical, future technologies.

  1. The need for the legal/legislative side of the world to close the gap between what’s possible and what’s legal
  2. The need for lifelong learning and to reinvent oneself
  3. The need to make pulse-checking/futurism an essential tool in the toolbox of every member of the workforce today and in the future

From DSC:
The key thing that I was trying to relay in my contribution to Matthew’s helpful article was that we are now on an exponential trajectory of technological change. This trend has ramifications for numerous societies around the globe, and it involves the legal realm as well. Hopefully, all of us in the workforce are coming to realize our need to be constantly pulse-checking the relevant landscapes around us. To help make that happen, each of us needs to be tapping into the appropriate “streams of content” that are relevant to our careers so that our knowledge bases are as up-to-date as possible. We’re all into lifelong learning now, right?

Along these lines, there is an increasing need for futurism to hit the mainstream. That is, when the world is moving at 120+ mph, the skills and methods that futurists follow must be better taught and understood, or many people will be broadsided by the changes brought about by emerging technologies. We need to better pulse-check the relevant landscapes, anticipate the oncoming changes, develop potential scenarios, and then design strategies to respond to those scenarios.

What does it say when a legal blockchain eBook has 1.7M views? — from legalmosaic.com by Mark A. Cohen

Excerpts (emphasis DSC):

“Blockchain For Lawyers,” a recently released eBook by Australian legal tech company Legaler, drew 1.7M views in two weeks. What does that staggering number say about blockchain, legal technology, and the legal industry? Clearly, blockchain is a hot legal topic, along with artificial intelligence (AI) and legal tech generally.

Legal practice and delivery are each changing. New practice areas like cryptocurrency, cybersecurity, and Internet law are emerging as law struggles to keep pace with the speed of business change in the digital age. Concurrently, several staples of traditional practice (research, document review, etc.) are becoming automated and/or are no longer performed by law firm associates. There is more “turnover” of practice tasks, more reliance on machines and non-licensed attorneys to mine data and provide domain expertise used by lawyers, and more collaboration than ever before. The emergence of new industries demands that lawyers not only provide legal expertise in support of new areas but also that they possess the intellectual agility to master them quickly. Many practice areas law students will encounter have yet to be created. That means that all lawyers will be required to be more agile than their predecessors and engage in ongoing training.

Amazon has 10,000 employees dedicated to Alexa — here are some of the areas they’re working on — from businessinsider.com by Avery Hartmans

Summary (emphasis DSC):

  • Amazon’s vice president of Alexa, Steve Rabuchin, has confirmed that yes, there really are 10,000 Amazon employees working on Alexa and the Echo.
  • Those employees are focused on things like machine learning and making Alexa more knowledgeable.
  • Some employees are working on giving Alexa a personality, too.

From DSC:
How might this trend impact learning spaces? For example, I am interested in using voice to intuitively “drive” smart classroom control systems (a rough sketch of what this could look like follows the list below):

  • “Alexa, turn on the projector”
  • “Alexa, dim the lights by 50%”
  • “Alexa, open Canvas and launch my Constitutional Law I class”
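
Purely as a hypothetical sketch (the class and method names below are invented, not part of any real Alexa or classroom AV product), here is roughly how spoken commands like those above could be mapped onto a room control layer once a voice assistant has transcribed them:

```python
# Hypothetical dispatcher: maps transcribed utterances to invented classroom
# control actions. None of these APIs correspond to a real product.
import re

class ClassroomControl:
    def projector(self, on: bool):
        print(f"Projector {'on' if on else 'off'}")

    def dim_lights(self, percent: int):
        print(f"Lights dimmed by {percent}%")

    def open_lms_course(self, course: str):
        print(f"Opening Canvas course: {course}")

def handle_utterance(text: str, room: ClassroomControl) -> None:
    """Very naive intent matching for a handful of classroom commands."""
    text = text.lower().strip()
    if "turn on the projector" in text:
        room.projector(on=True)
    elif match := re.search(r"dim the lights by (\d+)\s*%?", text):
        room.dim_lights(int(match.group(1)))
    elif match := re.search(r"open canvas and launch my (.+) class", text):
        room.open_lms_course(match.group(1).title())
    else:
        print("Sorry, I don't know that command yet.")

room = ClassroomControl()
handle_utterance("Alexa, turn on the projector", room)
handle_utterance("Alexa, dim the lights by 50%", room)
handle_utterance("Alexa, open Canvas and launch my Constitutional Law I class", room)
```

In a real deployment, the string matching would be replaced by the assistant’s own intent model, and the print statements by calls into the actual AV control and LMS integrations.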

Gartner survey shows 37% of organizations have implemented AI in some form — from gartner.com
Despite talent shortages, the percentage of enterprises employing AI grew 270% over the past four years

Excerpt:

The number of enterprises implementing artificial intelligence (AI) grew 270 percent in the past four years and tripled in the past year, according to the Gartner, Inc. 2019 CIO Survey. Results showed that organizations across all industries use AI in a variety of applications, but struggle with acute talent shortages.

 

The deployment of AI has tripled in the past year — rising from 25 percent in 2018 to 37 percent today. The reason for this big jump is that AI capabilities have matured significantly and thus enterprises are more willing to implement the technology. “We still remain far from general AI that can wholly take over complex tasks, but we have now entered the realm of AI-augmented work and decision science — what we call ‘augmented intelligence,’” Mr. Howard added.

 

Key Findings from the “2019 CIO Survey: CIOs Have Awoken to the Importance of AI”

  • The percentage of enterprises deploying artificial intelligence (AI) has tripled in the past year.
  • CIOs picked AI as the top game-changer technology.
  • Enterprises use AI in a wide variety of applications.
  • AI suffers from acute talent shortages.

Facebook’s ’10 year’ challenge is just a harmless meme — right? — from wired.com by Kate O’Neill

Excerpts:

But the technology raises major privacy concerns; the police could use the technology not only to track people who are suspected of having committed crimes, but also people who are not committing crimes, such as protesters and others whom the police deem a nuisance.

It’s tough to overstate the fullness of how technology stands to impact humanity. The opportunity exists for us to make it better, but to do that we also must recognize some of the ways in which it can get worse. Once we understand the issues, it’s up to all of us to weigh in.

 

From DSC:
In this posting, I discussed an idea for a new TV show — a program that would be both entertaining and educational. So I suppose that this posting is a Part II along those same lines. 

The program that came to mind at that time would focus on significant topics and issues within American society, offered up in a debate/presentation-style format.

I had envisioned that you could have different individuals, groups, or organizations discuss the pros and cons of an issue or topic. The show would provide contact information for helpful resources, groups, organizations, legislators, etc. These contacts would be for learning more about a subject or getting involved with finding a solution for that problem.

OR

…as I revisit that idea today…perhaps the show could feature humans versus an artificial intelligence such as IBM’s Project Debater:

Project Debater is the first AI system that can debate humans on complex topics. Project Debater digests massive texts, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent. Eventually, Project Debater will help people reason by providing compelling, evidence-based arguments and limiting the influence of emotion, bias, or ambiguity.

Top six AI and automation trends for 2019 — from forbes.com by Daniel Newman

Excerpt:

If your company hasn’t yet created a plan for AI and automation throughout your enterprise, you have some work to do. Experts believe AI will add nearly $16 trillion to the global economy by 2030, and 20% of companies surveyed are already planning to incorporate AI throughout their companies next year. As 2018 winds down, now is the time to take a look at some trends and predictions for AI and automation that I believe will dominate the headlines in 2019—and to think about how you may incorporate them into your own company.

 

Also see the following, along with an insert from DSC:

Kai-Fu Lee has a rosier picture than I do regarding how humanity will be impacted by AI. One simply needs to check out today’s news to see that humans have a very hard time creating unity, thinking about why businesses exist in the first place, and being kind to one another…

How AI can save our humanity — a TED Talk by Kai-Fu Lee
