From DSC:
In a previous posting, I discussed an idea for a new TV show — a program that would be both entertaining and educational. So I suppose this posting is Part II along those same lines.

The program I had in mind at that time would focus on significant topics and issues within American society, offered up in a debate/presentation-style format.

I had envisioned that you could have different individuals, groups, or organizations discuss the pros and cons of an issue or topic. The show would provide contact information for helpful resources, groups, organizations, legislators, etc. These contacts would help viewers learn more about a subject or get involved in finding a solution to the problem.

OR

…as I revisit that idea today…perhaps the show could feature humans versus an artificial intelligence such as IBM’s Project Debater:


Project Debater is the first AI system that can debate humans on complex topics. Project Debater digests massive texts, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent. Eventually, Project Debater will help people reason by providing compelling, evidence-based arguments and limiting the influence of emotion, bias, or ambiguity.


Top six AI and automation trends for 2019 — from forbes.com by Daniel Newman

Excerpt:

If your company hasn’t yet created a plan for AI and automation throughout your enterprise, you have some work to do. Experts believe AI will add nearly $16 trillion to the global economy by 2030, and 20% of companies surveyed are already planning to incorporate AI throughout their companies next year. As 2018 winds down, now is the time to take a look at some trends and predictions for AI and automation that I believe will dominate the headlines in 2019—and to think about how you may incorporate them into your own company.

 

Also see — and an insert here from DSC:

Kai-Fu Lee has a rosier picture than I do regarding how humanity will be impacted by AI. One only needs to check out today’s news to see that humans have a very hard time creating unity, thinking about why businesses exist in the first place, and being kind to one another…


How AI can save our humanity — from ted.com by Kai-Fu Lee


Guide to how artificial intelligence can change the world – Part 3 — from intelligenthq.com by Maria Fonseca and Paula Newton
This is Part 3 of a four-part guide about Artificial Intelligence. The guide covers some of its basic concepts, history and present applications, possible developments in the future, and its challenges as well as opportunities.

Excerpt:

Artificial intelligence is considered to be anything that gives machines intelligence which allows them to reason in the way that humans can. Machine learning is an element of artificial intelligence which is when machines are programmed to learn. This is brought about through the development of algorithms that work to find patterns, trends and insights from data that is input into them to help with decision making. Deep learning is in turn an element of machine learning. This is a particularly innovative and advanced area of artificial intelligence which seeks to try and get machines to both learn and think like people.
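
From DSC: To make “machines programmed to learn” a bit more concrete, here is a toy sketch in Python. The spam-filtering scenario, the two features, and the tiny dataset are all invented for illustration; the point is simply that the decision rule is recovered from example data rather than written out by hand.

# A toy illustration of "finding patterns in data": a one-nearest-neighbour
# classifier that labels an email as spam or not, based on two invented features.
training_data = [
    # (links_in_email, exclamation_marks) -> label
    ((8, 12), "spam"),
    ((7, 9), "spam"),
    ((1, 0), "not spam"),
    ((0, 1), "not spam"),
]

def predict(features):
    """Copy the label of the closest training example (squared Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest_features, closest_label = min(
        training_data, key=lambda item: distance(item[0], features)
    )
    return closest_label

print(predict((6, 10)))  # -> "spam" -- the rule was learned from the examples above
print(predict((0, 2)))   # -> "not spam"

Deep learning swaps the hand-picked features and the nearest-neighbour rule for many-layered neural networks, but the underlying idea of learning the rule from data is the same.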

 

Also see:

LinkedIn’s 2018 U.S. emerging jobs report — from economicgraph.linkedin.com

Excerpt (emphasis DSC):

Our biggest takeaways from this year’s Emerging Jobs Report:

  • Artificial Intelligence (AI) is here to stay. No, this doesn’t mean robots are coming for your job, but we are likely to see continued growth in fields and functions related to AI. This year, six out of the 15 emerging jobs are related in some way to AI, and our research shows that skills related to AI are starting to infiltrate every industry, not just tech. In fact, AI skills are among the fastest-growing skills on LinkedIn, and globally saw a 190% increase from 2015 to 2017.


All automated hiring software is prone to bias by default — from technologyreview.com

Excerpt:

A new report out from the nonprofit Upturn analyzed some of the most prominent hiring algorithms on the market and found that, by default, such algorithms are prone to bias.

The hiring steps: Algorithms have been made to automate four primary stages of the hiring process: sourcing, screening, interviewing, and selection. The analysis found that while predictive tools were rarely deployed to make that final choice on who to hire, they were commonly used throughout these stages to reject people.

 

“Because there are so many different points in that process where biases can emerge, employers should definitely proceed with caution,” says Bogen. “They should be transparent about what predictive tools they are using and take whatever steps they can to proactively detect and address biases that arise—and if they can’t confidently do that, they should pull the plug.”


Forecast 5.0 – The Future of Learning: Navigating the Future of Learning  — from knowledgeworks.org by Katherine Prince, Jason Swanson, and Katie King
Discover how current trends could impact learning ten years from now and consider ways to shape a future where all students can thrive.


AI Now Report 2018 | December 2018  — from ainowinstitute.org

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

Excerpt (emphasis DSC):

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Also see:

After a Year of Tech Scandals, Our 10 Recommendations for AI — from medium.com by the AI Now Institute
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

 

Also see:

Excerpt:

As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

 

From DSC:
This is a major heads up to the American Bar Association (ABA), law schools, governments, legislatures around the country, the courts, the corporate world, as well as for colleges, universities, and community colleges. The pace of emerging technologies is much faster than society’s ability to deal with them! 

The ABA and law schools need to majorly pick up their pace — for the benefit of all within our society.


Alexa, get me the articles (voice interfaces in academia) — from blog.libux.co by Kelly Dagan

Excerpt:

Credit to Jill O’Neill, who has written an engaging consideration of applications, discussions, and potentials for voice-user interfaces in the scholarly realm. She details a few use case scenarios: finding recent, authoritative biographies of Jane Austen; finding if your closest library has an item on the shelf now (and whether it’s worth the drive based on traffic).

Coming from an undergraduate-focused (and library) perspective, I can think of a few more:

  • asking if there are any group study rooms available at 7 pm and making a booking
  • finding out if [X] is open now (Archives, the Cafe, the Library, etc.)
  • finding three books on the Red Brigades, seeing if they are available, and saving the locations
  • grabbing five research articles on stereotype threat, to read later

 

Also see:


These news anchors are professional and efficient. They’re also not human. — from washingtonpost.com by Taylor Telford

Excerpt:

The new anchors at China’s state-run news agency have perfect hair and no pulse.

Xinhua News just unveiled what it is calling the world’s first news anchors powered by artificial intelligence, at the World Internet Conference on Wednesday in China’s Zhejiang province. From the outside, they are almost indistinguishable from their human counterparts, crisp-suited and even-keeled. Although Xinhua says the anchors have the “voice, facial expressions and actions of a real person,” the robotic anchors relay whatever text is fed to them in stilted speech that sounds less human than Siri or Alexa.

 

From DSC:
The question is…is this what we want our future to look like? Personally, I don’t care to watch a robotic newscaster giving me the latest “death and dying report.” It comes off badly enough — callous enough — from human beings backed by TV networks/stations that have agendas of their own, let alone from a robot run by AI.


From DSC:
I have often reflected on differentiation, or what some call personalized learning and/or customized learning. How does a busy teacher, instructor, professor, or trainer achieve this, realistically?

It’s very difficult and time-consuming to do, for sure. It also requires a team of specialists to achieve such a holy grail of learning — as one person can’t know it all. That is, one educator doesn’t have the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities — from students who have special learning needs and challenges to gifted students
  • Or learners who have different physical capabilities or restrictions
  • Or learners who have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches, etc. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty and/or start at a “basics” level and work their way up to harder/more detailed levels if they can cognitively handle that level of detail/complexity? Conversely, how do I as a learner get the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. Move things to the content creation end — not so much the delivery end of things. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML-like markup language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for the subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for the subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for the subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing those main points. </BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>

 

<BasicsOfMainPoints> could instead be called <AlternativeExplanations>.
Bottom line: this tag would put things forth using very straightforward terms.

Another tag could address how this topic/chapter is relevant:

<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>

 

On the students’ end, they could use an app that works with such tags to allow a learner to quickly see/review the different layers. That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics via alternative ways of expressing these things. I won’t remember all the details. Put things in easy-to-understand wording/ideas.
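
Here is a minimal sketch, in Python, of how such an app might work with the tags. The tag names are the hypothetical ones proposed above, wrapped in an assumed <Chapter> root element so the markup is well formed; nothing here is an existing product or publisher API.

import xml.etree.ElementTree as ET

# A tiny chapter marked up with the hypothetical tags proposed above.
SAMPLE_CHAPTER = """
<Chapter>
  <MainPoint>Tennis matches are decided by points, games, and sets.</MainPoint>
  <SubPoint1>A forehand is hit on the player's dominant side.</SubPoint1>
  <DetailsSubPoint1>Common forehand grips include the eastern and semi-western grips.</DetailsSubPoint1>
  <SubPoint2>A backhand is hit across the body.</SubPoint2>
  <DetailsSubPoint2>Backhands can be hit with one hand or two.</DetailsSubPoint2>
  <BasicsOfMainPoints>Tennis is a sport, and a sport is something people play.</BasicsOfMainPoints>
  <SummaryOfMainPoints>Points, games, and sets decide a match.</SummaryOfMainPoints>
</Chapter>
"""

# Which tags belong to which layer a learner can turn on or off.
LAYERS = {
    "main": ["MainPoint", "SummaryOfMainPoints"],
    "sub": ["SubPoint1", "SubPoint2", "SubPoint3"],
    "details": ["DetailsSubPoint1", "DetailsSubPoint2", "DetailsSubPoint3"],
    "basics": ["BasicsOfMainPoints"],
}

def render(chapter_xml, layers):
    """Return the text of the elements whose tags belong to a requested layer,
    in document order, giving a collapsed or expanded view of the chapter."""
    wanted = {tag for layer in layers for tag in LAYERS.get(layer, [])}
    root = ET.fromstring(chapter_xml.strip())
    return [el.text.strip() for el in root if el.tag in wanted and el.text]

print(render(SAMPLE_CHAPTER, ["main"]))                    # just the main points
print(render(SAMPLE_CHAPTER, ["main", "sub"]))             # add the sub points
print(render(SAMPLE_CHAPTER, ["main", "sub", "details"]))  # fill in the details
print(render(SAMPLE_CHAPTER, ["basics"]))                  # the plain-language version

An e-reader or learning app would drive the same kind of render() call from a collapse/expand control rather than print statements.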

 

It’s like the layers of a Microsoft HoloLens app of the human anatomy:

 

Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed:

 

This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level, prior knowledge, any IEPs on file, their learning preferences (audio, video, animations, etc.), and so on, to further provide a personalized/customized learning experience. 
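
Continuing the sketch above, the profile lookup could be as simple as mapping a few profile fields onto a set of layers before the chapter is rendered. The field names (reading_level, prefers_plain_language, iep_on_file, preferred_media) are invented for illustration; they are not part of any real learner-profile standard.

# A hypothetical learner profile, e.g. fetched from a cloud-based learner record.
profile = {
    "reading_level": "introductory",   # "introductory" | "intermediate" | "advanced"
    "prefers_plain_language": True,
    "iep_on_file": False,
    "preferred_media": ["audio", "video"],   # could steer toward audio/video versions of the layers
}

def layers_for(profile):
    """Map the (invented) profile fields onto the content layers described above."""
    if profile.get("prefers_plain_language") or profile.get("iep_on_file"):
        return ["basics", "main"]
    if profile.get("reading_level") == "advanced":
        return ["main", "sub", "details"]
    return ["main", "sub"]

print(layers_for(profile))   # -> ['basics', 'main']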

To recap:

  • “Textbooks” continue to be created by teams of specialists, but add specialists with knowledge of students with special needs as well as of gifted students. For example, a team could have experts within the field of Special Education help create one of the overlays/filters/lenses — i.e., to reword things. If the text was talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development (differentiation could also be provided for at the delivery end, but again, time and expertise are likely not going to be there)
  • Publishers create “overlays” or various layers that can be turned on or off by the learners
  • Can see whole chapters or can see main ideas, topic sentences, and/or details. Like HTML tags for web pages.
  • Can instantly collapse chapters to main ideas/outlines.


Robots won’t replace instructors, 2 Penn State educators argue. Instead, they’ll help them be ‘more human.’ — from edsurge.com by Tina Nazerian

Excerpt:

Specifically, it will help them prepare for and teach their courses through several phases—ideation, design, assessment, facilitation, reflection and research. The two described a few prototypes they’ve built to show what that might look like.

 

Also see:

The future of education: Online, free, and with AI teachers? — from fool.com by Simon Erickson
Duolingo is using artificial intelligence to teach 300 million people a foreign language for free. Will this be the future of education?

Excerpts:

While it might not get a lot of investor attention, education is actually one of America’s largest markets.

The U.S. has 20 million undergraduates enrolled in colleges and universities right now and another 3 million enrolled in graduate programs. Those undergrads paid an average of $17,237 for tuition, room, and board at public institutions in the 2016-17 school year and $44,551 for private institutions. Graduate education varies widely by area of focus, but the average amount paid for tuition alone was $24,812 last year.

Add all of those up, and America’s students are paying more than half a trillion dollars each year for their education! And that doesn’t even include the interest amassed for student loans, the college-branded merchandise, or all the money spent on beer and coffee.

Keeping the costs down
Several companies are trying to find ways to make college more affordable and accessible.

 

But after we launched, we have so many users that nowadays if the system wants to figure out whether it should teach plurals before adjectives or adjectives before plurals, it just runs a test with about 50,000 people. So for the next 50,000 people that sign up, which takes about six hours for 50,000 new users to come to Duolingo, to half of them it teaches plurals before adjectives. To the other half it teaches adjectives before plurals. And then it measures which ones learn better. And so once and for all it can figure out, ah it turns out for this particular language to teach plurals before adjectives for example.

So every week the system is improving. It’s making itself better at teaching by learning from our learners. So it’s doing that just based on huge amounts of data. And this is why it’s become so successful I think at teaching and why we have so many users.
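
From DSC: For anyone curious what that kind of test looks like mechanically, here is a minimal, hypothetical sketch in Python. The 50/50 assignment rule, the cohort size, and the made-up success rates are illustrative assumptions only, not a description of Duolingo’s actual system.

import random

# Two candidate lesson orderings being compared (illustrative only).
VARIANTS = {
    "A": "plurals before adjectives",
    "B": "adjectives before plurals",
}

def assign_variant(user_id):
    """Split newly signed-up users roughly 50/50 between the two orderings."""
    return "A" if user_id % 2 == 0 else "B"

def simulate_outcome(variant):
    """Stand-in for 'did the learner master the unit?' -- here just a biased coin."""
    success_rate = {"A": 0.62, "B": 0.58}[variant]   # made-up numbers
    return random.random() < success_rate

results = {"A": [], "B": []}
for user_id in range(50_000):   # roughly the cohort size mentioned in the excerpt
    variant = assign_variant(user_id)
    results[variant].append(simulate_outcome(variant))

for variant, outcomes in results.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"Ordering {variant} ({VARIANTS[variant]}): {rate:.1%} of learners succeeded")

In a real system the outcome would be an observed learning metric rather than a simulated one, and the better-performing ordering would then be rolled out to all new learners.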


From DSC:
I see AI helping learners, instructors, teachers, and trainers. I see AI as a tool to help do some of the heavy lifting, but people still like to learn with other people…with actual human beings. That said, a next-generation learning platform could be far more responsive than what today’s traditional institutions of higher education are delivering.
