Amazon, Microsoft, ‘putting world at risk of killer AI’: study — from news.yahoo.com by Issam Ahmed

Excerpt:

Washington (AFP) – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and if they had committed to abstaining from contributing in the future.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Addendum on 8/23/19:

 

AI is in danger of becoming too male — new research — from singularityhub.com by Juan Mateos-Garcia and Joysy John

Excerpts (emphasis DSC):

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

One way to minimize AI risks is to increase the diversity of the teams involved in their development. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20 percent of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.

 


From DSC:
My niece just left a very lucrative programming job and managerial role at Microsoft after working there for several years. As a single woman, she got tired of fighting the culture there. 

It was again a reminder to me that there are significant ramifications to the cultures of the big tech companies…especially given the power of these emerging technologies and the growing influence they are having on our culture.


Addendum on 8/20/19:

  • Google’s Hate Speech Detection A.I. Has a Racial Bias Problem — from fortune.com by Jonathan Vanian
    Excerpt:
    A Google-created tool that uses artificial intelligence to police hate speech in online comments on sites like the New York Times has become racially biased, according to a new study. The tool, developed by Google and a subsidiary of its parent company, often classified comments written in the African-American vernacular as toxic, researchers from the University of Washington, Carnegie Mellon, and the Allen Institute for Artificial Intelligence said in a paper presented in early August at the Association for Computational Linguistics conference in Florence, Italy.
  • On the positive side of things:
    Number of Female Students, Students of Color Tackling Computer Science AP on the Rise — from thejournal.com
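The Google study above boils down to a measurable gap in false-positive rates between dialect groups: benign comments from one group get flagged as "toxic" more often than benign comments from another. A minimal sketch of that measurement, using invented scores and an invented 0.5 threshold (not the researchers' data or code):

```python
# Hypothetical toxicity scores assigned by a classifier to comments that
# are actually benign, grouped by dialect. A biased model gives one
# group's benign text systematically higher scores.
scores = {
    "general_english": [0.12, 0.08, 0.31, 0.22, 0.15],
    "african_american_english": [0.55, 0.62, 0.41, 0.71, 0.38],
}

THRESHOLD = 0.5  # comments scoring above this get flagged as toxic

def false_positive_rate(benign_scores, threshold=THRESHOLD):
    """Share of benign comments wrongly flagged as toxic."""
    flagged = sum(1 for s in benign_scores if s > threshold)
    return flagged / len(benign_scores)

rates = {group: false_positive_rate(s) for group, s in scores.items()}
disparity = rates["african_american_english"] - rates["general_english"]
print(rates)            # {'general_english': 0.0, 'african_american_english': 0.6}
print(round(disparity, 2))  # 0.6
```

The gap in false-positive rates, not the raw accuracy, is what makes a moderation tool like this one disproportionately silence one community.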
 

DSC: Holy smokes!!! How might this be applied to education/learning/training in the 21st century!?!


 

“What if neither distance nor language mattered? What if technology could help you be anywhere you need to be and speak any language? Using AI technology and holographic experiences this is possible, and it is revolutionary.”

 

 

Also see:

Microsoft has a wild hologram that translates HoloLens keynotes into Japanese — from theverge.com
Azure and HoloLens combine for a hint at the future

Excerpt:

Microsoft has created a hologram that will transform someone into a digital speaker of another language. The software giant unveiled the technology during a keynote at the Microsoft Inspire partner conference [on 7/17/19] in Las Vegas. Microsoft recently scanned Julia White, a company executive for Azure, at a Mixed Reality capture studio to transform her into an exact hologram replica.

The digital version appeared onstage to translate the keynote into Japanese. Microsoft has used its Azure AI technologies and neural text-to-speech to make this possible. It works by taking recordings of White’s voice, in order to create a personalized voice signature, to make it sound like she’s speaking Japanese.


Microsoft’s new AI wants to help you crush your next presentation — from pcmag.com by Jake Leary
PowerPoint is receiving a slew of updates, including one that aims to help you improve your public speaking.

Excerpt:

Microsoft [on 6/18/19] announced several PowerPoint upgrades, the most notable of which is an artificial intelligence tool that aims to help you overcome pre-presentation jitters.

The Presenter Coach AI listens to you practice and offers real-time feedback on your pace, word choice, and more. It will, for instance, warn you if you’re using filler words like “umm” and “ahh,” profanities, non-inclusive language, or reading directly from your slides. At the end of your rehearsal, it provides a report with tips for future attempts. Presenter Coach arrives later this summer.

 

 

 Also see:

Microsoft is building a virtual assistant for work. Google is building one for everything else — from qz.com by Dave Gershgorn

Excerpts:

In the early days of virtual personal assistants, the goal was to create a multipurpose digital buddy—always there, ready to take on any task. Now, tech companies are realizing that doing it all is too much, and instead doubling down on what they know best.

Since the company has a deep understanding of how organizations work, Microsoft is focusing on managing your workday with voice, rearranging meetings and turning the dials on the behemoth of bureaucracy in concert with your phone.

 

Voice is the next major platform, and being first to it is an opportunity to make the category as popular as Apple made touchscreens. To dominate even one aspect of voice technology is to tap into the next iteration of how humans use computers.

 

 

From DSC:
What affordances might these developments provide for our future learning spaces?

Will faculty members’ voices be recognized to:

  • Sign onto the LMS?
  • Dim the lights?
  • Turn on the projector(s) and/or display(s)?
  • Other?

Will students be able to send the contents of their mobile devices to particular displays via their voices?

Will voice be mixed in with augmented reality (i.e., the students and their devices can “see” which device to send their content to)?

Hmmm…time will tell.
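A toy sketch of the kind of utterance-to-action routing those classroom scenarios imply, once speech has already been converted to text (the phrases and device actions below are hypothetical, not any vendor's API):

```python
# Hypothetical mapping from spoken phrases to classroom device actions.
INTENTS = {
    "dim the lights": "lighting.dim",
    "turn on the projector": "projector.power_on",
    "sign me in": "lms.sign_on",
}

def route(utterance: str) -> str:
    """Match a transcribed utterance to a device action by phrase lookup."""
    text = utterance.lower().strip()
    for phrase, action in INTENTS.items():
        if phrase in text:
            return action
    return "unknown"

print(route("Please dim the lights"))   # lighting.dim
print(route("Could you sign me in?"))   # lms.sign_on
```

A production system would sit a real speech-recognition and intent-classification service in front of this, but the last hop to the projector or the LMS is still a routing table like the one above.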

 

 

Blockchain: The move from freedom to the rigid, dominant system in learning — from oeb.global by Inge de Waard
In this post Inge de Waard gives an overview of current Blockchain options from industry and looks at its impact on universities as well as philosophises on its future.

Excerpt:

I mentioned a couple of Blockchain certification options already, but an even more advanced blockchain in learning example has entered on my radar too. It is a Russian implementation called Disciplina. This platform combines education (including vocational training), recruiting (comparable with what LinkedIn is doing with its economic graph) and careers for professionals. All of this is combined into a blockchain solution that keeps track of all the learners’ journey. The platform includes not only online courses as we know it but also coaching. After each training, you get a certificate.

TeachMePlease, which is a partner of Disciplina, enables teachers and students to find each other for specific professional training as well as curriculum-related children’s schooling. Admittedly, these initiatives are still being rolled out in terms of courses, but it clearly shows where the next learning will be located: in an umbrella above all the universities and professional academies. At present, the university courses are being embedded into course offerings by corporations that roll out a layer post-university, or post-vocational schooling.

Europe embraces blockchain, as can be seen with their EU Blockchain observatory and forum. And in a more national action, Malta is storing their certifications in a blockchain nationwide as well. We cannot deny that blockchain is getting picked up by both companies and governments. Universities have been piloting several blockchain certification options, and they also harbour some of the leading voices in the debate on blockchain certification.
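At its core, the certification use case these pilots describe reduces to anchoring a tamper-evident digest of a credential record, so that anyone can later verify the document without the issuer re-attesting it. A minimal sketch in plain Python (ordinary hashing, not tied to Disciplina or any particular chain):

```python
import hashlib
import json

def credential_digest(credential: dict) -> str:
    """Deterministic SHA-256 digest of a credential record.
    Anchoring this digest on a chain lets anyone verify the record
    later: recompute the digest and compare."""
    canonical = json.dumps(credential, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

cert = {"learner": "Jane Doe", "course": "AI Fundamentals", "issued": "2019-04-03"}
digest = credential_digest(cert)

# Any edit to the record changes the digest, so tampering is detectable.
tampered = {**cert, "course": "AI Fundamentals (Honours)"}
print(digest == credential_digest(cert))       # True
print(digest == credential_digest(tampered))   # False
```

What the blockchain adds over a plain database is only the anchoring step: a widely replicated, append-only place to publish the digest so the issuer cannot quietly rewrite history.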

 

Also see:

AI in education — April 2019 by Inge de Waard

Future proof learning — the Skills 3.0 project

 

Also see:

  • 7 blockchain mistakes and how to avoid them — from computerworld.com by Lucas Mearian
    The blockchain industry is still something of a wild west, with many cloud service offerings and a large universe of platforms that can vary greatly in their capabilities. So enterprises should beware jumping to conclusions about the technology.
 
 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

To attract talent, corporations turn to MOOCs — from edsurge.com by Wade Tyler Millward

Excerpt:

When executives at tech giants Salesforce and Microsoft decided in fall 2017 to turn to an online education platform to help train potential users of their products, they turned to Pierre Dubuc and his team.

Two years later, Dubuc’s company, OpenClassrooms, has closed deals with both of them. Salesforce has worked with OpenClassrooms to create and offer a developer-training course to help people learn how to use the Salesforce platform. In a similar vein, Microsoft will use the OpenClassrooms platform for a six-month course in artificial intelligence. If students complete the AI program, they are guaranteed a job within six months or get their money back. They also earn a master’s-level diploma accredited in Europe.

 

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.


Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it gets more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies like Amazon are selling the technology to police departments.

 

Also see:

 

Microsoft debuts Ideas in Word, a grammar and style suggestions tool powered by AI — from venturebeat.com by Kyle Wiggers; with thanks to Mr. Jack Du Mez for his posting on this over on LinkedIn

Excerpt:

The first day of Microsoft’s Build developer conference is typically chock-full of news, and this year was no exception. During a keynote headlined by CEO Satya Nadella, the Seattle company took the wraps off a slew of updates to Microsoft 365, its lineup of productivity-focused, cloud-hosted software and subscription services. Among the highlights were a new AI-powered grammar and style checker in Word Online, dubbed Ideas in Word, and dynamic email messages in Outlook Mobile.

Ideas in Word builds on Editor, an AI-powered proofreader for Office 365 that was announced in July 2016 and replaced the Spelling & Grammar pane in Office 2016 later that year. Ideas in Word similarly taps natural language processing and machine learning to deliver intelligent, contextually aware suggestions that could improve a document’s readability. For instance, it’ll recommend ways to make phrases more concise, clear, and inclusive, and when it comes across a particularly tricky snippet, it’ll put forward synonyms and alternative phrasings.

 

Also see:

 

 

From LinkedIn.com today:

 




 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we be able to clearly see the sun? An exaggeration…? Maybe…maybe not.

But, now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder…“How did we get to this place?”

 

If you owned this $137,000+ car, would you turn its keys over to your 16-to-25-year-old?!

 

As another example, just because we can…

just because we can does not mean we should

 

…doesn’t mean we should.

 


 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
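The reported total (about $60 for nine hours of footage) is consistent with per-minute video-analysis pricing. A back-of-the-envelope check, assuming roughly $0.10 per minute of processed footage (an assumed list price, not a figure quoted in the article):

```python
hours_of_footage = 9
assumed_price_per_minute = 0.10  # USD per minute of video analyzed; an assumption

minutes = hours_of_footage * 60
video_cost = minutes * assumed_price_per_minute
print(f"{minutes} min of footage -> ${video_cost:.2f}")  # 540 min of footage -> $54.00
```

The point of the arithmetic is the order of magnitude: a day of mass surveillance over a public park costs less than a tank of gas.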


From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 

The Growing Profile of Non-Degree Credentials: Diving Deeper into ‘Education Credentials Come of Age’ — from evolllution.com by Sean Gallagher
Higher education is entering a “golden age” of lifelong learning and that will mean a spike in demand for credentials. If postsecondary institutions want to compete in a crowded market, they need to change fast.

Excerpts (emphasis DSC):

One of the first levels of opportunity is simply embedding the skills that are demanded in the job market into educational programs. Education certainly has its own merits independent of professional outcomes. But critics of higher education who suggest graduates aren’t prepared for the workforce have a point in terms of the opportunity for greater job market alignment, and less of an “ivory tower” mentality at many institutions. Importantly, this does not mean that there isn’t value in the liberal arts and in broader ways of thinking—problem solving, leadership, critical thinking, analysis, and writing are among the very top skills demanded by employers across all educational levels. These are foundational and independent of technical skills.

The second opportunity is building an ecosystem for better documentation and sharing of skills—in a sense what investor Ryan Craig has termed a “competency marketplace.” Employers’ reliance on college degrees as relatively blunt signals of skill and ability is partly driven by the fact that there aren’t many strong alternatives. Technology—and the growth of platforms like LinkedIn, ePortfolios and online assessments—is changing the game. One example is digital badges, which were originally often positioned as substitutes to degrees or certificates.

Instead, I believe digital badges are a supplement to degrees and we’re increasingly seeing badges—short microcredentials that discretely and digitally document competency—woven into degree programs, from the community college to the graduate degree level.

 

However, it is becoming increasingly clear that the market is demanding more “agile” and shorter-form approaches to education. Many institutions are making this a strategic priority, especially as we read the evolution of trends in the global job market and soon enter the 2020s.

Online education—which in all its forms continues to slowly and steadily grow its market share in terms of all higher ed instruction—is certainly an enabler of this vision, given what we know about pedagogy and the ability to digitally document outcomes.

 

In addition, 64 percent of the HR leaders we surveyed said that the need for ongoing lifelong learning will demand higher levels of education and more credentials in the future.

 

Along these lines of online-based collaboration and learning,
go to the 34-minute mark of this video:

 

From DSC:
The various pieces are coming together to build the next generation learning platform. Although no one has all of the pieces yet, the needs/trends/signals are definitely there.

 

Daniel Christian — Learning from the Living Class Room

 

Addenda on 4/20/19:

 

 

Microsoft and OpenClassrooms to train students to fill high-demand AI jobs — from news.microsoft.com

Excerpt:

Strategic partnership aims to address the talent gap in technology hiring
PARIS – April 3, 2019 – Microsoft Corp. and online education leader OpenClassrooms are announcing a new partnership to train and prepare students for artificial intelligence (AI) jobs in the workplace. The collaboration is designed to provide more students with access to education to learn in-demand skills and to qualify for high-tech jobs, while giving employers access to great talent to fill high-tech roles.

The demand for next-generation artificial intelligence skills has far outpaced the number of candidates in the job market. One estimate suggests that, by 2022, a talent shortage will leave as many as 30% of AI and data skills jobs open.

 

Students who complete the program are guaranteed a job within six months or they will receive a full refund from OpenClassrooms.

 

Also see:

Tesla START: Student Automotive Technician Program

Excerpt:

Tesla START is an intensive training program designed to provide students across North America with the skills necessary for a successful career with Tesla—at the forefront of the electric vehicle revolution. During the program, students will develop technical expertise and earn certifications through a blended approach of in-class theory, hands-on labs and self-paced learning.

We are partnering with colleges across the country to integrate Tesla START into automotive technician curriculums as a 12-week capstone—providing students with a smooth transition from college to full-time employment.

 
© 2024 | Daniel Christian