From DSC:
When a professor walks into the room, the mobile device he or she is carrying notifies the system, which automatically establishes that professor's preferred settings for the room; alternatively, voice recognition allows a voice-based interface to adjust the room's settings (a rough sketch of this kind of trigger logic follows the list below):

  • The lights dim to 50%
  • The projector comes on
  • The screen comes down
  • The audio is turned up to his/her liking
  • The LMS is logged into with his/her login info and launches the class that he/she is teaching at that time of day
  • The temperature is checked and adjusted if too high or low
  • Etc.
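To make the idea more concrete, here is a minimal sketch of what that presence-triggered logic might look like in Python. Everything in it is hypothetical: the device methods, the preference fields, and the LMS call are illustrative placeholders, not a real building-automation or LMS API.

from dataclasses import dataclass

@dataclass
class ProfessorPreferences:
    # Hypothetical per-professor room preferences
    name: str
    light_level_pct: int        # e.g., 50 = dim the lights to 50%
    audio_volume_pct: int
    preferred_temp_f: float
    lms_username: str

class RoomController:
    # Illustrative stand-in for whatever actually controls the room's devices
    def set_lights(self, pct): print(f"Lights set to {pct}%")
    def projector_on(self): print("Projector on")
    def lower_screen(self): print("Screen down")
    def set_volume(self, pct): print(f"Audio volume set to {pct}%")
    def current_temperature_f(self): return 74.0
    def set_temperature_f(self, temp): print(f"Thermostat set to {temp} F")
    def launch_lms_course(self, username, hour):
        print(f"Logging {username} into the LMS and opening the {hour}:00 course")

def on_professor_detected(prefs, room, hour):
    # Runs when the professor's mobile device (or a voice command) announces arrival
    room.set_lights(prefs.light_level_pct)
    room.projector_on()
    room.lower_screen()
    room.set_volume(prefs.audio_volume_pct)
    # Only nudge the thermostat if the room is noticeably too warm or too cold
    if abs(room.current_temperature_f() - prefs.preferred_temp_f) > 2:
        room.set_temperature_f(prefs.preferred_temp_f)
    room.launch_lms_course(prefs.lms_username, hour)

The same routine could be triggered by a Bluetooth or Wi-Fi presence event or by a voice command, which is what makes the two entry points described above effectively interchangeable.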
 

Guide to how artificial intelligence can change the world – Part 3 — from intelligenthq.com by Maria Fonseca and Paula Newton
This is part 3 of a four-part guide about artificial intelligence. The guide covers some of its basic concepts, its history and present applications, possible future developments, and its challenges and opportunities.

Excerpt:

Artificial intelligence is considered to be anything that gives machines intelligence which allows them to reason in the way that humans can. Machine learning is an element of artificial intelligence which is when machines are programmed to learn. This is brought about through the development of algorithms that work to find patterns, trends and insights from data that is input into them to help with decision making. Deep learning is in turn an element of machine learning. This is a particularly innovative and advanced area of artificial intelligence which seeks to try and get machines to both learn and think like people.
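As a concrete illustration of the "algorithms that find patterns in data" idea, here is a minimal sketch in Python; it assumes the scikit-learn library is available, and the numbers and feature names are invented purely for illustration.

# Machine learning in miniature: instead of hand-coding rules, we fit a model
# that finds a pattern in example data and then applies it to new cases.
from sklearn.linear_model import LogisticRegression

# Each row is one (hypothetical) student: [hours_studied, prior_quiz_score]
X = [[1, 40], [2, 55], [3, 60], [5, 70], [6, 80], [8, 90]]
y = [0, 0, 0, 1, 1, 1]   # 1 = passed the exam, 0 = did not

model = LogisticRegression()
model.fit(X, y)                        # "learning" = estimating the pattern

print(model.predict([[4, 65]]))        # apply the learned pattern to a new student
print(model.predict_proba([[4, 65]]))  # and inspect the model's confidence

Deep learning follows the same learn-from-examples pattern, but swaps the simple model above for many-layered neural networks that can pick up far more complex patterns.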

Also see:

LinkedIn’s 2018 U.S. emerging jobs report — from economicgraph.linkedin.com

Excerpt (emphasis DSC):

Our biggest takeaways from this year’s Emerging Jobs Report:

  • Artificial Intelligence (AI) is here to stay. No, this doesn’t mean robots are coming for your job, but we are likely to see continued growth in fields and functions related to AI. This year, six out of the 15 emerging jobs are related in some way to AI, and our research shows that skills related to AI are starting to infiltrate every industry, not just tech. In fact, AI skills are among the fastest-growing skills on LinkedIn, and globally saw a 190% increase from 2015 to 2017.

From DSC:
How long before voice drives most appliances, thermostats, etc.?

Hisense is bringing Android and AI smarts to its 2019 TV range — from techradar.com by Stephen Lambrechts
Some big announcements planned for CES 2019

Excerpt (emphasis DSC):

Hisense has announced that it will unveil the next evolution of its VIDAA smart TV platform at CES 2019 next month, promising to take full advantage of artificial intelligence with version 3.0.

Each television in Hisense’s 2019 ULED TV lineup will boast the updated VIDAA 3.0 AI platform, with Amazon Alexa functionality fully integrated into the devices, meaning you won’t need an Echo device to use Alexa voice control features.

Digital transformation reality check: 10 trends — from enterprisersproject.com by Stephanie Overby
2019 is the year when CIOs scrutinize investments, work even more closely with the CEO, and look to AI to shape strategy. What other trends will prove key?

Excerpt (emphasis DSC):

6. Technology convergence expands
Lines have already begun to blur between software development and IT operations thanks to the widespread adoption of DevOps. Meanwhile, IT and operational technology are also coming together in data-centric industries like manufacturing and logistics.

“A third convergence – that many are feeling but not yet articulating, and that will have a profound impact on how CIOs structure and staff their organizations, design their architectures, build their budgets, and govern their operations – is the convergence of applications and infrastructure,” says Edwards. “In the digital age, it is nearly impossible to build a strategy for infrastructure that doesn’t include a substantial number of considerations for applications and vice versa.”

While most IT organizations still have heads of infrastructure and applications managing their own teams, that may begin to change as trends like software-defined infrastructure grow. “In 2019, CIOs will need to begin to grapple with the challenges to their operating models when the lines within the traditional IT tower blur and sometimes fade,” Edwards says.

All automated hiring software is prone to bias by default — from technologyreview.com

Excerpt:

A new report from the nonprofit Upturn analyzed some of the most prominent hiring algorithms on the market and found that, by default, such algorithms are prone to bias.

The hiring steps: Algorithms have been made to automate four primary stages of the hiring process: sourcing, screening, interviewing, and selection. The analysis found that while predictive tools were rarely deployed to make that final choice on who to hire, they were commonly used throughout these stages to reject people.

“Because there are so many different points in that process where biases can emerge, employers should definitely proceed with caution,” says Bogen. “They should be transparent about what predictive tools they are using and take whatever steps they can to proactively detect and address biases that arise—and if they can’t confidently do that, they should pull the plug.”

Forecast 5.0 – Navigating the Future of Learning — from knowledgeworks.org by Katherine Prince, Jason Swanson, and Katie King
Discover how current trends could impact learning ten years from now and consider ways to shape a future where all students can thrive.

AI Now Report 2018 | December 2018  — from ainowinstitute.org

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

Excerpt (emphasis DSC):

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6.  Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

Also see:

After a Year of Tech Scandals, Our 10 Recommendations for AI — from medium.com by the AI Now Institute
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

Also see:

Excerpt:

As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

From DSC:
This is a major heads-up for the American Bar Association (ABA), law schools, governments, legislatures around the country, the courts, and the corporate world, as well as for colleges, universities, and community colleges. The pace of emerging technologies is much faster than society’s ability to deal with them!

The ABA and law schools need to pick up their pace considerably — for the benefit of all within our society.

The information below is from Heather Campbell at Chegg
(emphasis DSC)

Chegg Math Solver is an AI-driven tool that helps students understand math. It is more than just a calculator – it explains the approach to solving the problem, so students won’t just copy the answer but will understand it and be able to solve similar problems themselves. Most importantly, students can dig deeper into a problem and see why it’s solved that way. Chegg Math Solver.

In every subject, there are many key concepts and terms that are crucial for students to know and understand. Often it can be hard to determine what the most important concepts and terms are for a given subject, and even once you’ve identified them you still need to understand what they mean. To help you learn and understand these terms and concepts, we’ve provided thousands of definitions, written and compiled by Chegg experts. Chegg Definition.

From DSC:
I see this type of functionality as a piece of a next-generation learning platform — a piece of the Living from the Living [Class] Room type of vision. Great work here by Chegg!

Likely, students will also be able to take pictures of their homework, submit it online, and have that image/problem analyzed for correctness and/or where things went wrong with it.

Intelligent Machines: One of the fathers of AI is worried about its future — from technologyreview.com by Will Knight
Yoshua Bengio wants to stop talk of an AI arms race and make the technology more accessible to the developing world.

Excerpts:

Yoshua Bengio is a grand master of modern artificial intelligence.

Alongside Geoff Hinton and Yann LeCun, Bengio is famous for championing a technique known as deep learning that in recent years has gone from an academic curiosity to one of the most powerful technologies on the planet.

Deep learning involves feeding data to large neural networks that crudely simulate the human brain, and it has proved incredibly powerful and effective for all sorts of practical tasks, from voice recognition and image classification to controlling self-driving cars and automating business decisions.

Bengio has resisted the lure of any big tech company. While Hinton and LeCun joined Google and Facebook, respectively, he remains a full-time professor at the University of Montreal. (He did, however, cofound Element AI in 2016, and it has built a very successful business helping big companies explore the commercial applications of AI research.)

Bengio met with MIT Technology Review’s senior editor for AI, Will Knight, at an MIT event recently.

What do you make of the idea that there’s an AI race between different countries?

I don’t like it. I don’t think it’s the right way to do it.

We could collectively participate in a race, but as a scientist and somebody who wants to think about the common good, I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible.

Alexa, get me the articles (voice interfaces in academia) — from blog.libux.co by Kelly Dagan

Excerpt:

Credit to Jill O’Neill, who has written an engaging consideration of applications, discussions, and potentials for voice-user interfaces in the scholarly realm. She details a few use case scenarios: finding recent, authoritative biographies of Jane Austen; finding if your closest library has an item on the shelf now (and whether it’s worth the drive based on traffic).

Coming from an undergraduate-focused (and library) perspective, I can think of a few more:

  • asking if there are any group study rooms available at 7 pm and making a booking
  • finding out if [X] is open now (Archives, the Cafe, the Library, etc.)
  • finding three books on the Red Brigades, seeing if they are available, and saving the locations
  • grabbing five research articles on stereotype threat, to read later

10 predictions for tech in 2019 — from enterprisersproject.com by Carla Rudder
IT leaders look at the road ahead and predict what’s next for containers, security, blockchain, and more

Excerpts:

We asked IT leaders and tech experts what they see on the horizon for the future of technology. We intentionally left the question open-ended, and as a result, the answers represent a broad range of what IT professionals may expect to face in the new year. Let’s dig in…

3. Security becomes must-have developer skill.
Developers who have job interviews next year will see a new question added to the usual list.

5. Ethics take center stage with tech talent
Robert Reeves, CTO and co-founder, Datical: “More companies (prompted by their employees) will become increasingly concerned about the ethics of their technology. Microsoft is raising concerns of the dangers of facial recognition technology; Google employees are very concerned about their AI products being used by the Department of Defense. The economy is good for tech right now and the job market is becoming tighter. Thus, I expect those companies to take their employees’ concerns very seriously. Of course, all bets are off when (not if) we dip into a recession. But, for 2019, be prepared for more employees of tech giants to raise ethical concerns and for those concerns to be taken seriously and addressed.”

7. Customers expect instant satisfaction
All customers will be the customer of ‘now,’ with expectations of immediate and personalized service; single-click approval for loans, sales quotes on the spot, and deliveries in hours instead of days. The window of opportunity for customer satisfaction will keep closing and technology will evolve to keep pace. Real-time analytics will become faster and smarter as data that is external to the organization, such as social, news and weather, will be included for more insights. The move to the cloud will accelerate with the growing adoption of open-source vendors.

From DSC:
Regarding #7 above…as the years progress, how do you suppose this type of environment, in which people expect instant satisfaction and personalized service, will impact education and training?

Beijing to judge every resident based on behavior by end of 2020 — from bloomberg.com

  • China capital plans ‘social credit’ system by end of 2020
  • Citizens with poor scores will be ‘unable to move’ a step

Excerpt:

China’s plan to judge each of its 1.3 billion people based on their social behavior is moving a step closer to reality, with Beijing set to adopt a lifelong points program by 2021 that assigns personalized ratings for each resident.

The capital city will pool data from several departments to reward and punish some 22 million citizens based on their actions and reputations by the end of 2020, according to a plan posted on the Beijing municipal government’s website on Monday. Those with better so-called social credit will get “green channel” benefits while those who violate laws will find life more difficult.

The Beijing project will improve blacklist systems so that those deemed untrustworthy will be “unable to move even a single step,” according to the government’s plan.

From DSC:
Matthew 18:21-35 comes to mind big time here! I’m glad the LORD isn’t like this…we would all be in trouble.

Mama Mia It’s Sophia: A Show Robot Or Dangerous Platform To Mislead? — from forbes.com by Noel Sharkey

Excerpts:

A collective eyebrow was raised by the AI and robotics community when the robot Sophia was given Saudi citizenship in 2017. The AI sharks were already circling as Sophia’s fame spread with worldwide media attention. Were they just jealous buzz-kills, or is something deeper going on? Sophia has gripped the public imagination with its interesting and fun appearances on TV and on high-profile conference platforms.

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.

A dangerous path for our rights and security
For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on people’s lives, from mortgage and loan applications to job interviews, to prison sentences and bail guidance, to transport and delivery services, to medicine and care.

It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.

 
Combining retrieval, spacing, and feedback boosts STEM learning — from retrievalpractice.org

Punchline:
Scientists demonstrated that when college students used a quizzing program that combined retrieval practice, spacing, and feedback, exam performance increased by nearly a letter grade.


Abstract
The most effective educational interventions often face significant barriers to widespread implementation because they are highly specific, resource intense, and/or comprehensive. We argue for an alternative approach to improving education: leveraging technology and cognitive science to develop interventions that generalize, scale, and can be easily implemented within any curriculum. In a classroom experiment, we investigated whether three simple, but powerful principles from cognitive science could be combined to improve learning. Although implementation of these principles only required a few small changes to standard practice in a college engineering course, it significantly increased student performance on exams. Our findings highlight the potential for developing inexpensive, yet effective educational interventions that can be implemented worldwide.

In summary, the combination of spaced retrieval practice and required feedback viewing had a powerful effect on student learning of complex engineering material. Of course, the principles from cognitive science could have been applied without the use of technology. However, our belief is that advances in technology and ideas from machine learning have the potential to exponentially increase the effectiveness and impact of these principles. Automation is an important benefit, but technology also can provide a personalized learning experience for a rapidly growing, diverse body of students who have different knowledge and academic backgrounds. Through the use of data mining, algorithms, and experimentation, technology can help us understand how best to implement these principles for individual learners while also producing new discoveries about how people learn. Finally, technology facilitates access. Even if an intervention has a small effect size, it can still have a substantial impact if broadly implemented. For example, aspirin has a small effect on preventing heart attacks and strokes when taken regularly, but its impact is large because it is cheap and widely available. The synergy of cognitive science, machine learning, and technology has the potential to produce inexpensive, but powerful learning tools that generalize, scale, and can be easily implemented worldwide.

Keywords: Education. Technology. Retrieval practice. Spacing. Feedback. Transfer of learning.
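To make the three principles concrete, here is a minimal sketch of how a quizzing tool might combine them: questions are answered from memory (retrieval practice), re-asked on a widening schedule (spacing), and always followed by an explanation (required feedback viewing). The Leitner-style intervals and all names below are my own illustrative choices, not the scheduling actually used in the study.

import datetime as dt
from dataclasses import dataclass, field

# Review intervals, in days, that widen after each successful retrieval.
# These particular numbers are illustrative, not taken from the study.
INTERVALS = [1, 3, 7, 14, 30]

@dataclass
class Question:
    prompt: str
    answer: str
    feedback: str                 # explanation shown after every attempt
    level: int = 0                # index into INTERVALS
    due: dt.date = field(default_factory=dt.date.today)

def due_today(questions, today):
    # Spacing: only quiz items whose review date has arrived
    return [q for q in questions if q.due <= today]

def grade(question, response, today):
    # Retrieval practice: the student answers from memory first...
    correct = response.strip().lower() == question.answer.strip().lower()
    # ...and feedback is shown whether or not the answer was correct
    print(question.feedback)
    if correct:
        question.level = min(question.level + 1, len(INTERVALS) - 1)  # widen the spacing
    else:
        question.level = 0                                            # back to a short interval
    question.due = today + dt.timedelta(days=INTERVALS[question.level])

Even this simple scheme reflects the synergy the authors describe: each element is easy to implement on its own, and software makes it trivial to apply all three across an entire course.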

From DSC:
I agree with futurist Thomas Frey:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet.”

(source)

Along these lines, see what Arizona State University is up to:

We think of this as a transformation away from a mass-production model to a mass-personalization model. For us, that’s the big win in this whole process. When we move away from the large lectures in that mass-production model that we’ve used for the last 170 years and get into something that reflects each of the individual learners’ needs and can personalize their learning path through the instructional resources, we will have successfully moved the education industry to the new frontier in the learning process. We think that mass personalization has already permeated every aspect of our lives, from navigation to entertainment; and education is really the next big frontier.

(source)

From DSC:
Each year the vision I outlined here gets closer and closer and closer and closer. With the advancements in Artificial Intelligence (AI), change is on the horizon…big time. Mass personalization. More choice. More control.

© 2025 | Daniel Christian