Should self-driving cars have ethics? — from npr.org by Laurel Wamsley

Excerpt:

In the not-too-distant future, fully autonomous vehicles will drive our streets. These cars will need to make split-second decisions to avoid endangering human lives — both inside and outside of the vehicles.

To determine attitudes toward these decisions, a group of researchers created a variation on the classic philosophical exercise known as “the trolley problem.” They posed a series of moral dilemmas involving a self-driving car with brakes that suddenly give out…


Reflections on “Are ‘smart’ classrooms the future?” [Johnston]

Are ‘smart’ classrooms the future? — from campustechnology.com by Julie Johnston
Indiana University explores that question by bringing together tech partners and university leaders to share ideas on how to design classrooms that make better use of faculty and student time.

Excerpt:

To achieve these goals, we are investigating smart solutions that will:

  • Untether instructors from the room’s podium, allowing them control from anywhere in the room;
  • Streamline the start of class, including biometric login to the room’s technology, behind-the-scenes routing of course content to room displays, control of lights and automatic attendance taking;
  • Offer whiteboards that can be captured, routed to different displays in the room and saved for future viewing and editing;
  • Provide small-group collaboration displays and the ability to easily route content to and from these displays; and
  • Deliver these features through a simple, user-friendly and reliable room/technology interface.

Activities included collaborative brainstorming focusing on these questions:

  • What else can we do to create the classroom of the future?
  • What current technology exists to solve these problems?
  • What could be developed that doesn’t yet exist?
  • What’s next?


From DSC:
Though many people’s eyes (including faculty members’) gloss over when we start talking about learning spaces and smart classrooms, it’s still an important topic. Personally, I’d rather be learning in an engaging, exciting learning environment that’s outfitted with a variety of tools (physical as well as digital and virtual) that make sense for that community of learners. Also, faculty members have very limited time to get across campus, into the classroom, and get things set up; the more of that setup that can be automated, the better!

I’ve long posted items regarding machine-to-machine communications, voice recognition and voice-enabled interfaces, artificial intelligence, bots, algorithms, learning spaces, smart classrooms, and a variety of vendors and their products (including Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Google’s Home/Google Assistant), as I do think those things are components of our future learning ecosystems.


AI can’t replace doctors. But it can make them better. — from technologyreview.com by Rahul Parikh; via Maree Conway
A machine can collate environmental data, genetic data, and patient history way better than I can.

Excerpts:

But for physicians like me in primary care, managing 1,500 to 2,000 patients, AI presents an opportunity. I went to medical school to connect with people and make a difference. Today I often feel like an overpaid bookkeeper instead, taking in information and spitting it back to patients, prescribing drugs and adjusting doses, ordering tests. But AI in the exam room opens up the chance to recapture the art of medicine. It could let me get to know my patients better, learn how a disease uniquely affects them, and give me time to coach them toward a better outcome.

AI might also help to manage asthma flares. For many patients, asthma gets worse as air pollution levels rise, as happened this past summer when brush fires swept through Northern California. AI could let us take environmental information and respond proactively.

Not long ago, in the Journal of the American Medical Association, I saw a colorful picture drawn by a child in crayon. It portrayed her pediatrician, eyes glued to the computer, while she sat on the exam table, looking wide-eyed. I hope that AI will soon allow me to turn my attention back to that little girl.

Rahul Parikh is a pediatrician in the San Francisco Bay area.


Global installed base of smart speakers to surpass 200 million in 2020, says GlobalData

The global installed base for smart speakers will hit 100 million early next year, before surpassing the 200 million mark at some point in 2020, according to GlobalData, a leading data and analytics company.

The company’s latest report, ‘Smart Speakers – Thematic Research’, states that nearly every leading technology company is either already producing a smart speaker or developing one, with Facebook the latest to enter the fray (launching its Portal device this month). The appetite for smart speakers is also not limited by geography, with China in particular emerging as a major marketplace.

Ed Thomas, Principal Analyst for Technology Thematic Research at GlobalData, comments: “It is only four years since Amazon unveiled the Echo, the first wireless speaker to incorporate a voice-activated virtual assistant. Initial reactions were muted but the device, and the Alexa virtual assistant it contained, quickly became a phenomenon, with the level of demand catching even Amazon by surprise.”

Smart speakers give companies like Amazon, Google, Apple, and Alibaba access to a vast amount of highly valuable user data. They also allow users to get comfortable interacting with artificial intelligence (AI) tools in general, and virtual assistants in particular, increasing the likelihood that they will use them in other situations, and they lock customers into a broader ecosystem, making it more likely that they will buy complementary products or access other services, such as online stores.

Thomas continues: “Smart speakers, particularly lower-priced models, are gateway devices, in that they give consumers the opportunity to interact with a virtual assistant like Amazon’s Alexa or Google’s Assistant, in a ‘safe’ environment. For tech companies serious about competing in the virtual assistant sector, a smart speaker is becoming a necessity, hence the recent entry of Apple and Facebook into the market and the expected arrival of Samsung and Microsoft over the next year or so.”

In terms of the competitive landscape for smart speakers, Amazon was the pioneer and is still a dominant force, although its first-mover advantage has been eroded over the last year or so. Its closest challenger is Google, but neither company is present in the fastest-growing geographic market, China. Alibaba is the leading player there, with Xiaomi also performing well.

Thomas concludes: “With big names like Samsung and Microsoft expected to launch smart speakers in the next year or so, the competitive landscape will continue to fluctuate. It is likely that we will see two distinct markets emerge: the cheap, impulse-buy end of the spectrum, used by vendors to boost their ecosystems; and the more expensive, luxury end, where greater focus is placed on sound quality and aesthetics. This is the area of the market at which Apple has aimed the HomePod and early indications are that this is where Samsung’s Galaxy Home will also look to make an impact.”

Information based on GlobalData’s report: Smart Speakers – Thematic Research


Academics Propose a ‘Blockchain University,’ Where Faculty (and Algorithms) Rule — from edsurge.com by Jeff Young

Excerpt:

A group of academics affiliated with Oxford University have proposed a new model of higher education that replaces traditional administrators with “smart contracts” on the blockchain, the same technology that drives Bitcoin and other cryptocurrencies.

“Our aim is to create a university in which the bulk of administrative tasks are either eliminated or progressively automated,” said the effort’s founders in a white paper released earlier this year. Those proposing the idea added the university would be “a decentralised, non-profit, democratic community in which the use of blockchain technology will provide the contractual stability needed to pursue a full course of study.”

Experiments with blockchain in higher education are underway at multiple campuses around the country, and many researchers are looking into how to use the technology to verify and deliver credentials. The Massachusetts Institute of Technology, for example, began issuing diplomas via blockchain last year.

The plan by Oxford researchers goes beyond digital diplomas—and beyond many typical proposals to disrupt education in general. It argues for a completely new framework for how college is organized, how professors are paid, and how students connect with learning. In other words, it’s a long shot.

But even if the proposed platform never emerges, it is likely to spur debates about whether blockchain technology could one day allow professors to reclaim greater control of how higher education operates through digital contracts.

 

The platform would essentially allow professors to organize their own colleges, and teach and take payments from students directly.

Gartner: Immersive experiences among top tech trends for 2019 — from campustechnology.com by Dian Schaffhauser

Excerpt:

IT analyst firm Gartner has named its top 10 trends for 2019, and the “immersive user experience” is on the list, alongside blockchain, quantum computing and seven other drivers influencing how we interact with the world. The annual trend list covers breakout tech with broad impact and tech that could reach a tipping point in the near future.


Reflections on “Inside Amazon’s artificial intelligence flywheel” [Levy]

Inside Amazon’s artificial intelligence flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt (emphasis DSC):

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

It took a lot of six-pagers to transform Amazon from a deep-learning wannabe into a formidable power. The results of this transformation can be seen throughout the company—including in a recommendations system that now runs on a totally new machine-learning infrastructure. Amazon is smarter in suggesting what you should read next, what items you should add to your shopping list, and what movie you might want to watch tonight. And this year Thirumalai started a new job, heading Amazon search, where he intends to use deep learning in every aspect of the service.

“If you asked me seven or eight years ago how big a force Amazon was in AI, I would have said, ‘They aren’t,’” says Pedro Domingos, a top computer science professor at the University of Washington. “But they have really come on aggressively. Now they are becoming a force.”

Maybe the force.

From DSC:
When will we begin to see more mainstream recommendation engines for learning-based materials? With the demand for people to reinvent themselves, such a next-generation learning platform can’t come soon enough!

  • Turning over control to learners to create/enhance their own web-based learner profiles; and allowing people to say who can access their learning profiles.
  • AI-based recommendation engines to help people identify curated, effective digital playlists for what they want to learn about.
  • Voice-driven interfaces.
  • Matching employees to employers.
  • Matching one’s learning preferences (not styles) with the content being presented as one piece of a personalized learning experience.
  • From cradle to grave. Lifelong learning.
  • Multimedia-based, interactive content.
  • Asynchronously and synchronously connecting with others learning about the same content.
  • Online-based tutoring/assistance; remote assistance.
  • Reinvent. Staying relevant. Surviving.
  • Competency-based learning.
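One item on the list above, an AI-based recommendation engine for curated digital playlists, can be sketched in a few lines. This is a hypothetical toy for illustration only: the playlist names, tags, and overlap-count scoring are all invented here, not taken from any actual platform.

```python
# Toy sketch: match a learner profile's interests to curated playlists
# by counting overlapping tags. Real engines would use richer signals
# (history, collaborative filtering, embeddings); this just shows the idea.

def recommend(profile_interests, playlists, top_n=2):
    """Rank playlists by how many tags overlap the learner's interests."""
    scored = []
    for name, tags in playlists.items():
        overlap = len(set(tags) & set(profile_interests))
        scored.append((overlap, name))
    scored.sort(reverse=True)                    # highest overlap first
    return [name for score, name in scored[:top_n] if score > 0]

# Hypothetical learner profile and playlist catalog.
playlists = {
    "Intro to Data Science": ["python", "statistics", "data"],
    "Digital Storytelling":  ["media", "writing", "video"],
    "Machine Learning 101":  ["python", "ai", "math"],
}
print(recommend(["python", "ai"], playlists))
# → ['Machine Learning 101', 'Intro to Data Science']
```

A learner-owned profile (the first bullet above) would simply feed its interest list into a function like this, with the learner controlling who may read that profile.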

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

We’re about to embark on a period in American history where career reinvention will be critical, perhaps more so than it’s ever been before. In the next decade, as many as 50 million American workers—a third of the total—will need to change careers, according to McKinsey Global Institute. Automation, in the form of AI (artificial intelligence) and RPA (robotic process automation), is the primary driver. McKinsey observes: “There are few precedents in which societies have successfully retrained such large numbers of people.”

Bill Triant and Ryan Craig


Also relevant/see:

Online education’s expansion continues in higher ed with a focus on tech skills — from educationdive.com by James Paterson

Dive Brief:

  • Online learning continues to expand in higher ed with the addition of several online master’s degrees and a new for-profit college that offers a hybrid of vocational training and liberal arts curriculum online.
  • Inside Higher Ed reported the nonprofit learning provider edX is offering nine master’s degrees through five U.S. universities — the Georgia Institute of Technology, the University of Texas at Austin, Indiana University, Arizona State University and the University of California, San Diego. The programs include cybersecurity, data science, analytics, computer science and marketing, and they cost from around $10,000 to $22,000. Most offer stackable certificates, helping students who change their educational trajectory.
  • Former Harvard University Dean of Social Science Stephen Kosslyn, meanwhile, will open Foundry College in January. The for-profit, two-year program targets adult learners who want to upskill, and it includes training in soft skills such as critical thinking and problem solving. Students will pay about $1,000 per course, though the college is waiving tuition for its first cohort.


A cyber-skills shortage means students are being recruited to fight off hackers — from technologyreview.com by Erin Winick
Students with little or no cybersecurity knowledge are being paired with easy-to-use AI software that lets them protect their campus from attack.

Excerpt:

There aren’t enough cybersecurity workers out there—and things are getting worse. According to one estimate, 3.5 million cybersecurity jobs will be unfilled by 2021. And fewer than one in four of the candidates who apply are even qualified.

That’s why many large corporations are investing in longer-term solutions like mobile training trucks and apprenticeship programs. But Texas A&M University has found a way to solve its labor shortage in the short term. It’s pairing student security beginners with AI software.

The college’s Security Operations Center deals with about a million attempts to hack the university system each month. While the center does have some full-time employees, the majority of its security force is made up of students. Ten students currently work alongside AI software to detect, monitor, and remediate the threats.


In the 2030 and beyond world, employers will no longer be a separate entity from the education establishment. Pressures from both the supply and demand side are so large that employers and learners will end up, by default, co-designing new learning experiences, where all learning counts.

 

OBJECTIVES FOR CONVENINGS

  • Identify the skills everyone will need to navigate the changing relationship between machine intelligence and people over the next 10-12 years.
  • Develop implications for work, workers, students, working learners, employers, and policymakers.
  • Identify a preliminary set of actions that need to be taken now to best prepare for the changing work + learn ecosystem.

Three key questions guided the discussions:

  1. What are the LEAST and MOST essential skills needed for the future?
  2. Where and how will tomorrow’s workers and learners acquire the skills they really need?
  3. Who is accountable for making sure individuals can thrive in this new economy?

This report summarizes the experts’ views on what skills will likely be needed to navigate the work + learn ecosystem over the next 10–15 years—and their suggested steps for better serving the nation’s future needs.

 

In a new world of work, driven especially by AI, institutionally-sanctioned curricula could give way to AI-personalized learning. This would drastically change the nature of existing social contracts between employers and employees, teachers and students, and governments and citizens. Traditional social contracts would need to be renegotiated or revamped entirely. In the process, institutional assessment and evaluation could well shift from top-down to new bottom-up tools and processes for developing capacities, valuing skills, and managing performance through new kinds of reputation or accomplishment scores.

 

In October 2017, Chris Wanstrath, CEO of GitHub, the foremost code-sharing and social networking resource for programmers today, made a bold statement: “The future of coding is no coding at all.” He believes that the writing of code will be automated in the near future, leaving humans to focus on “higher-level strategy and design of software.” Many of the experts at the convenings agreed. Even creating the AI systems of tomorrow, they asserted, will likely require less human coding than is needed today, with graphic interfaces turning AI programming into a drag-and-drop operation.

Digital fluency does not mean knowing coding languages. Experts at both convenings contended that effectively “befriending the machine” will be less about teaching people to code and more about being able to empathize with AIs and machines, understanding how they “see the world” and “think” and “make decisions.” Machines will create languages to talk to one another.

Here’s a list of many skills the experts do not expect to see much of—if at all—in the future:

  • Coding. Systems will be self-programming.
  • Building AI systems. Graphic interfaces will turn AI programming into drag-and-drop operations.
  • Calendaring, scheduling, and organizing. There won’t be a need for email triage.
  • Planning and even decision-making. AI assistants will pick this up.
  • Creating more personalized curricula. Learners may design more of their own personalized learning adventure.
  • Writing and reviewing resumes. Digital portfolios, personal branding, and performance reputation will replace resumes.
  • Language translation and localization. This will happen in real time using translator apps.
  • Legal research and writing. Many of our legal systems will be automated.
  • Validation skills. Machines will check people’s work to validate their skills.
  • Driving. Driverless vehicles will eliminate the need to learn how to drive.

Here’s a list of the most essential skills needed for the future:

  • Quantitative and algorithmic thinking.  
  • Managing reputation.  
  • Storytelling and interpretive skills.  
  • First principles thinking.  
  • Communicating with machines as machines.  
  • Augmenting high-skilled physical tasks with AI.
  • Optimization and debugging frame of mind.
  • Creativity and growth mindset.
  • Adaptability.
  • Emotional intelligence.
  • Truth seeking.
  • Cybersecurity.

 

The rise of machine intelligence is just one of the many powerful social, technological, economic, environmental, and political forces that are rapidly and disruptively changing the way everyone will work and learn in the future. Because this largely tech-driven force is so interconnected with other drivers of change, it is nearly impossible to understand the impact of intelligent agents on how we will work and learn without also imagining the ways in which these new tools will reshape how we live.

The coming revolution in software development — from forbes.com by Matt Bornstein

Excerpt:

Amid the deep learning hype, though, many observers miss the biggest reason to be optimistic about its future: deep learning requires coders to write very little actual code. Rather than relying on preset rules or if-then statements, a deep learning system writes rules automatically based on past examples. A software developer only has to create a “rough skeleton,” to paraphrase Andrej Karpathy from Tesla, then let the computers do the rest.

In this new world, developers no longer need to design a unique algorithm for each problem. Most work focuses, instead, on generating datasets that reflect desired behavior and managing the training process. Pete Warden from Google’s TensorFlow team pointed this out as far back as 2014: “I used to be a coder,” he wrote. “Now I teach computers to write their own programs.”

Again: the programming model driving the most important advances in software today does not require a significant amount of actual programming.
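That shift can be sketched in miniature. The following toy uses a single perceptron (pure Python, not a real deep learning stack, and every name and value below is illustrative): instead of hand-writing an if-then rule for logical OR, we supply a dataset reflecting the desired behavior and let training recover the rule.

```python
# "Rules from examples": fit a tiny perceptron on labeled examples
# instead of hand-coding the decision rule.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with 0/1 labels."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = y - pred                 # 0 when the prediction is right
            w1 += lr * err * x1            # nudge the weights toward
            w2 += lr * err * x2            # the desired behavior
            b  += lr * err
    return lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# The "dataset reflecting desired behavior": the truth table for OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
rule = train_perceptron(data)
print([rule(a, b) for (a, b), _ in data])  # → [0, 1, 1, 1]
```

The developer's "rough skeleton" here is just the model shape and the training loop; the rule itself comes out of the data.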

What does this mean for the future of software development?


An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?

 

 

My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitization, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts, they are complete game changers transforming multiple domains simultaneously.

 

 

If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard


From DSC:
Though this letter was written two years ago, back in October 2016, the messages, reflections, and questions that Gerd puts on the table are still very much relevant today. The leaders of these powerful companies have enormous power: power to do good or to do evil, power to help or to hurt, power to be a positive force for societies throughout the globe and help create dreams, or power to create dystopian societies and a future filled with nightmares. The state of the human heart is extremely key here, though many will hate me for saying that. But it’s true. At the end of the day, we need to care very much about, and be extremely aware of, the character and values of the leaders of these powerful companies.


Also relevant/see:

Spray-on antennas will revolutionize the Internet of Things — from networkworld.com by Patrick Nelson
Researchers at Drexel University have developed a method to spray on antennas that outperform traditional metal antennas, opening the door to faster and easier IoT deployments.

From DSC:
Again, it’s not too hard to imagine that technologies in this arena can be used for good or for ill.

 

 

What does the Top Tools for Learning 2018 list tell us about the future direction of L&D? — from modernworkplacelearning.com by Jane Hart

Excerpt:

But for me 3 key things jump out:

  1. More and more people are learning for themselves – in whatever way suits them best – whether it is finding resources or online courses on the Web or interacting with their professional network. And they do all this for a variety of reasons: to solve problems, to self-improve, to prepare themselves for the future, etc.
  2. Learning at work is becoming more personal and continuous in that it is a key part of many professionals’ working day. What’s more, people are not only organising their own learning activities, they are also managing their own development – either with (informal) digital notebooks or with (formal) personal learning platforms.
  3. But it is in team collaboration where most of their daily learning takes place, and many now recognise and value the social collaboration platforms that underpin their daily interactions with colleagues as part of their daily work.

In other words, many people now see workplace learning not as something that happens irregularly in corporate training, but as a continuous, on-demand activity.

 


From DSC:
Reminds me of tapping into — and contributing towards — streams of content. All the time. Continuous, lifelong learning.


NEW: The Top Tools for Learning 2018 [Jane Hart]

The Top Tools for Learning 2018 from the 12th Annual Digital Learning Tools Survey -- by Jane Hart

 

The above was from Jane’s posting 10 Trends for Digital Learning in 2018 — from modernworkplacelearning.com by Jane Hart

Excerpt:

[On 9/24/18], I released the Top Tools for Learning 2018, which I compiled from the results of the 12th Annual Digital Learning Tools Survey.

I have also categorised the tools into 30 different areas, and produced 3 sub-lists that provide some context to how the tools are being used:

  • Top 100 Tools for Personal & Professional Learning 2018 (PPL100): the digital tools used by individuals for their own self-improvement, learning and development – both inside and outside the workplace.
  • Top 100 Tools for Workplace Learning (WPL100): the digital tools used to design, deliver, enable and/or support learning in the workplace.
  • Top 100 Tools for Education (EDU100): the digital tools used by educators and students in schools, colleges, universities, adult education etc.

 

3 – Web courses are increasing in popularity.
Although Coursera is still the most popular web course platform, there are, in fact, now 12 web course platforms on the list. New additions this year include Udacity and Highbrow (the latter provides daily micro-lessons). It is clear that people like these platforms because they can choose what they want to study as well as how they want to study, i.e. they can dip in and out if they want to and no-one is going to tell them off – unlike most corporate online courses, which have a prescribed path through them and whose use is heavily monitored.


5 – Learning at work is becoming personal and continuous.
The most significant feature of the list this year is the huge leap that Degreed has made – up 86 places to 47th, the biggest increase by any tool this year. Degreed is a lifelong learning platform and provides the opportunity for individuals to own their expertise and development through a continuous learning approach. And, interestingly, Degreed appears on both the PPL100 (at 30) and the WPL100 (at 52). This suggests that some organisations are beginning to see the importance of personal, continuous learning at work. Indeed, another platform that underpins this has also moved up the list significantly this year: Anders Pink, a smart curation platform available for both individuals and teams, which delivers daily curated resources on specified topics. Non-traditional learning platforms are therefore coming to the forefront, as the next point further shows.


From DSC:
Perhaps some foreshadowing of the presence of a powerful, online-based, next-generation learning platform…?


How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.


How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.

 

 

Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

 

 

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”

 

 

Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).

 

 

 

Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.

 

 

 

Microsoft's conference room of the future

 

From DSC:
Microsoft’s conference room of the future “listens” to the conversations of the team and provides a transcript of the meeting. It also uses “artificial intelligence tools to then act on what meeting participants say. If someone says ‘I’ll follow up with you next week,’ then they’ll get a notification in Microsoft Teams, Microsoft’s Slack competitor, to actually act on that promise.”

This made me wonder about our learning spaces in the future. Will an #AI-based device/cloud-based software app be able to “listen” to the discussion in a classroom — in real time — and present helpful resources in the smart classroom of the future (e.g., websites, online-based databases, journal articles, and more)?

Will this be a feature of a next generation learning platform as well (i.e., addressing the online-based learning realm)? Will this be a piece of an intelligent tutor or an intelligent system?

Hmmm…time will tell.
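As a thought experiment, the simplest version of such a classroom assistant is just keyword spotting against a curated resource map. The sketch below is purely illustrative — the topic map and resource titles are hypothetical, not any vendor’s product or API:

```python
import re

# Hypothetical topic-to-resource map; a real system might instead query
# a library database, LMS, or search index behind the scenes.
TOPIC_RESOURCES = {
    "trolley problem": ["Stanford Encyclopedia of Philosophy: Doctrine of Double Effect"],
    "photosynthesis": ["Khan Academy: Photosynthesis overview"],
}

def suggest_resources(utterance: str) -> list[str]:
    """Scan one utterance from a live transcript and return resources
    whose topic keywords it mentions (whole-phrase, case-insensitive)."""
    text = utterance.lower()
    suggestions = []
    for topic, resources in TOPIC_RESOURCES.items():
        if re.search(r"\b" + re.escape(topic) + r"\b", text):
            suggestions.extend(resources)
    return suggestions

# A fragment of classroom discussion:
print(suggest_resources("Let's revisit the trolley problem from last week."))
```

A production system would obviously need real speech-to-text in front of this and something far smarter than keyword matching behind it, but the basic listen-match-surface loop is the same.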

 

 


 

Also see this article at Forbes.com entitled, “There’s Nothing Artificial About How AI Is Changing The Workplace.”

Here is an excerpt:

The New Meeting Scribe: Artificial Intelligence

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.
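To make the “skim, search, and jump to those mentions” idea concrete, here is a minimal sketch of searching a speaker-tagged transcript for a term and returning the timestamps a player could jump to. This is my own illustration of the concept, not Otter’s or Zoom’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One speaker-identified chunk of a meeting transcript."""
    speaker: str
    start_seconds: float
    text: str

def find_mentions(transcript: list[Segment], term: str) -> list[tuple[str, float]]:
    """Return (speaker, timestamp) pairs for every segment mentioning
    the term, so a player could jump straight to those moments."""
    term = term.lower()
    return [(seg.speaker, seg.start_seconds)
            for seg in transcript
            if term in seg.text.lower()]

transcript = [
    Segment("Alice", 0.0, "Welcome, let's review the budget."),
    Segment("Bob", 42.5, "I'll follow up on the budget next week."),
    Segment("Alice", 90.0, "Moving on to hiring."),
]
print(find_mentions(transcript, "budget"))
```

The real services layer speaker diarization and fuzzy search on top of this, but the underlying data shape — timestamped, speaker-attributed segments — is what makes the time savings in the survey plausible.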

 

 
© 2025 | Daniel Christian