Also see:

Microsoft is building a virtual assistant for work. Google is building one for everything else — from qz.com by Dave Gershgorn

Excerpts:

In the early days of virtual personal assistants, the goal was to create a multipurpose digital buddy—always there, ready to take on any task. Now, tech companies are realizing that doing it all is too much, and instead doubling down on what they know best.

Since the company has a deep understanding of how organizations work, Microsoft is focusing on managing your workday with voice, rearranging meetings and turning the dials on the behemoth of bureaucracy in concert with your phone.

 

Voice is the next major platform, and being first to it is an opportunity to make the category as popular as Apple made touchscreens. To dominate even one aspect of voice technology is to tap into the next iteration of how humans use computers.

 

 

From DSC:
What affordances might these developments provide for our future learning spaces?

Will faculty members’ voices be recognized to:

  • Sign onto the LMS?
  • Dim the lights?
  • Turn on the projector(s) and/or display(s)?
  • Other?

Will students be able to send the contents of their mobile devices to particular displays via their voices?

Will voice be mixed in with augmented reality (i.e., the students and their devices can “see” which device to send their content to)?

Hmmm…time will tell.
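The classroom voice commands imagined above could, in a toy form, amount to an intent router. Below is a minimal, entirely hypothetical sketch (every device name, phrase, and action is invented for illustration; a real system would sit behind a speech recognizer and speaker-verification layer):

```python
# Hypothetical classroom voice-command router: maps a recognized
# utterance to a room-control action. All intents and device names
# are invented for illustration only.

def route_command(utterance: str) -> str:
    """Return a description of the action taken for a spoken command."""
    text = utterance.lower()
    # Each rule: (keywords that must all appear, action description)
    rules = [
        (("sign", "lms"), "Signing instructor into the LMS"),
        (("dim", "lights"), "Dimming the room lights"),
        (("turn on", "projector"), "Powering on the projector"),
        (("send", "display"), "Routing mobile-device content to the display"),
    ]
    for keywords, action in rules:
        if all(k in text for k in keywords):
            return action
    return "Command not recognized"

print(route_command("Please dim the lights"))          # Dimming the room lights
print(route_command("Turn on the projector, please"))  # Powering on the projector
```

Keyword matching like this is deliberately naive; recognizing *which* faculty member is speaking (for LMS sign-on) would require voice biometrics on top of it.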

 

 

INSIGHT: Ten ways machine learning will transform the practice of law — from news.bloomberglaw.com by Caroline Sweeney
Law firms are increasingly using machine learning and artificial intelligence, which have become standard in document review. Dorsey & Whitney’s Caroline Sweeney says any firm that wants to stay competitive should get on board now and gives examples for use and best practices.

 

 

Is your college future-ready? — from jisc.ac.uk by Robin Ghurbhurun

Excerpt:

Artificial intelligence (AI) is increasingly becoming science fact rather than science fiction. Alexa is everywhere from the house to the car, Siri is in the palm of your hand and students and the wider community can now get instant responses to their queries. We as educators have a duty to make sense of the information out there, working alongside AI to facilitate students’ curiosities.

Instead of banning mobile phones on campus, let’s manage our learning environments differently

We need to plan strategically to avoid a future where only the wealthy have access to human teachers, whilst others are taught with AI. We want all students to benefit from both. We should have teacher-approved content from VLEs and AI assistants supporting learning and discussion, everywhere from the classroom to the workplace. Let’s learn from the domestic market; witness the increasing rise of co-bot workers coming to an office near you.

 

 

Stanford team aims at Alexa and Siri with a privacy-minded alternative — from nytimes.com by John Markoff

Excerpt:

Now computer scientists at Stanford University are warning about the consequences of a race to control what they believe will be the next key consumer technology market — virtual assistants like Amazon’s Alexa and Google Assistant.

The group at Stanford, led by Monica Lam, a computer systems designer, last month received a $3 million grant from the National Science Foundation. The grant is for an internet service they hope will serve as a Switzerland of sorts for systems that use human language to control computers, smartphones and internet devices in homes and offices.

The researchers’ biggest concern is that virtual assistants, as they are designed today, could have a far greater impact on consumer information than today’s websites and apps. Putting that information in the hands of one big company or a tiny clique, they say, could erase what is left of online privacy.

 

Amazon sends Alexa developers on quest for ‘holy grail of voice science’ — from venturebeat.com by Khari Johnson

Excerpt:

At Amazon’s re:Mars conference last week, the company rolled out Alexa Conversations in preview. Conversations is a module within the Alexa Skills Kit that stitches together Alexa voice apps into experiences that help you accomplish complex tasks.

Alexa Conversations may be Amazon’s most intriguing and substantial pitch to voice developers in years. Conversations will make creating skills possible with fewer lines of code. It will also do away with the need to understand the many different ways a person can ask to complete an action, as a recurrent neural network will automatically generate dialogue flow.

For users, Alexa Conversations will make it easier to complete tasks that require the incorporation of multiple skills and will cut down on the number of interactions needed to do things like reserve a movie ticket or order food.
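To see why "understanding the many different ways a person can ask" is such a burden, consider this plain-Python toy (this is not the Alexa Skills Kit API): the traditional approach forces developers to enumerate sample utterances for each intent by hand, and every unlisted phrasing falls through.

```python
# Toy illustration (not the Alexa Skills Kit API): hand-enumerated
# sample utterances for a single intent -- the maintenance burden
# that Alexa Conversations' learned dialogue flow aims to remove.

ORDER_FOOD_UTTERANCES = {
    "order food",
    "i want to order dinner",
    "get me something to eat",
    "can you order a pizza",
}

def match_intent(utterance: str) -> str:
    """Match an utterance against the hand-enumerated samples."""
    if utterance.lower().strip() in ORDER_FOOD_UTTERANCES:
        return "OrderFoodIntent"
    return "FallbackIntent"  # every unlisted phrasing falls through

print(match_intent("Order food"))                   # OrderFoodIntent
print(match_intent("I'm hungry, order something"))  # FallbackIntent
```

A learned model generalizes across phrasings instead of requiring this ever-growing list, which is the "fewer lines of code" pitch in the excerpt above.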

 

 
 

Facial recognition smart glasses could make public surveillance discreet and ubiquitous — from theverge.com by James Vincent; with thanks to Mr. Paul Czarapata, Ed.D. out on Twitter for this resource
A new product from UAE firm NNTC shows where this tech is headed next. <– From DSC: though hopefully not!!!

Excerpt:

From train stations and concert halls to sport stadiums and airports, facial recognition is slowly becoming the norm in public spaces. But new hardware formats like these facial recognition-enabled smart glasses could make the technology truly ubiquitous, able to be deployed by law enforcement and private security any time and any place.

The glasses themselves are made by American company Vuzix, while Dubai-based firm NNTC is providing the facial recognition algorithms and packaging the final product.

 

From DSC…I commented out on Twitter:

Thanks Paul for this posting – though I find it very troubling. Emerging technologies race out ahead of society. I would be interested in knowing the age of the people developing these technologies and whether they care about asking the tough questions…like “Just because we can, should we be doing this?”

 

Addendum on 6/12/19:

 

‘Robots’ Are Not ‘Coming for Your Job’—Management Is — from gizmodo.com by Brian Merchant; with a special thanks going out to Keesa Johnson for her posting this out on LinkedIn

A robot is not ‘coming for’, or ‘stealing’ or ‘killing’ or ‘threatening’ to take away your job. Management is.

Excerpt (emphasis DSC):

At first glance, this might seem like a nitpicky semantic complaint, but I assure you it’s not—this phrasing helps, and has historically helped, mask the agency behind the *decision* to automate jobs. And this decision is not made by ‘robots,’ but management. It is a decision most often made with the intention of saving a company or institution money by reducing human labor costs (though it is also made in the interests of bolstering efficiency and improving operations and safety). It is a human decision that ultimately eliminates the job.

 

From DSC:
I’ve often said that if all the C-Suite cares about is maximizing profits — instead of thinking about their fellow human beings and society as a whole — we’re in big trouble.

If the thinking goes, “Heh — it’s just business!” <– then, again, we’re in big trouble.

Just because we can, should we? Many people should be reflecting upon this question…and not just members of the C-Suite.

 

 

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

From DSC:
I’m wondering to what extent artificial intelligence will be used to write code in the future…and/or to review/tweak/correct code…? Along these lines, see: “Introducing AI-Assisted Development to Elevate Low-Code Platforms to the Next Level.”

Excerpt:

Mendix was founded on the belief that software development could only be significantly improved if we introduced a paradigm shift. And that’s what we did. We fundamentally changed how software is created. With the current generation of the Mendix Platform, business applications can be created 10 times faster in close collaboration or even owned by the business, with IT being in control. Today we announce the next innovation, the introduction of AI-assisted development, which gives everyone the equivalent of a world-class coach looking over their shoulder.

 

 

To attract talent, corporations turn to MOOCs — from edsurge.com by Wade Tyler Millward

Excerpt:

When executives at tech giants Salesforce and Microsoft decided in fall 2017 to turn to an online education platform to help train potential users of their products, they turned to Pierre Dubuc and his team.

Two years later, Dubuc’s company, OpenClassrooms, has closed deals with both of them. Salesforce has worked with OpenClassrooms to create and offer a developer-training course to help people learn how to use the Salesforce platform. In a similar vein, Microsoft will use the OpenClassrooms platform for a six-month course in artificial intelligence. If students complete the AI program, they are guaranteed a job within six months or get their money back. They also earn a master’s-level diploma accredited in Europe.

 

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.


Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it gets more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies, like Amazon, are selling the technology to police departments.

 

Also see:

 

Introduction: Leading the social enterprise—Reinvent with a human focus
2019 Global Human Capital Trends
— from deloitte.com by Volini, Schwartz, Roy, Hauptmann, Van Durme, Denny, and Bersin

Excerpt (emphasis DSC):

Learning in the flow of life. The number-one trend for 2019 is the need for organizations to change the way people learn; 86 percent of respondents cited this as an important or very important issue. It’s not hard to understand why. Evolving work demands and skills requirements are creating an enormous demand for new skills and capabilities, while a tight labor market is making it challenging for organizations to hire people from outside. Within this context, we see three broader trends in how learning is evolving: It is becoming more integrated with work; it is becoming more personal; and it is shifting—slowly—toward lifelong models. Effective reinvention along these lines requires a culture that supports continuous learning, incentives that motivate people to take advantage of learning opportunities, and a focus on helping individuals identify and develop new, needed skills.

 

People, Power and Technology: The Tech Workers’ View — from doteveryone.org.uk

Excerpt:

People, Power and Technology: The Tech Workers’ View is the first in-depth research into the attitudes of the people who design and build digital technologies in the UK. It shows that workers are calling for an end to the era of moving fast and breaking things.

Significant numbers of highly skilled people are voting with their feet and leaving jobs they feel could have negative consequences for people and society. This is heightening the UK’s tech talent crisis and running up employers’ recruitment and retention bills. Organisations and teams that can understand and meet their teams’ demands to work responsibly will have a new competitive advantage.

While Silicon Valley CEOs have tried to reverse the “techlash” by showing their responsible credentials in the media, this research shows that workers:

    • need guidance and skills to help navigate new dilemmas
    • have an appetite for more responsible leadership
    • want clear government regulation so they can innovate with awareness

Also see:

  • U.K. Tech Staff Quit Over Work On ‘Harmful’ AI Projects — from forbes.com by Sam Shead
    Excerpt:
    An alarming number of technology workers operating in the rapidly advancing field of artificial intelligence say they are concerned about the products they’re building. Some 59% of U.K. tech workers focusing on AI have experience of working on products that they felt might be harmful for society, according to a report published on Monday by Doteveryone, the think tank set up by lastminute.com cofounder and Twitter board member Martha Lane Fox.

 

 

 

Watch Salvador Dalí Return to Life Through AI — from interestingengineering.com
The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life.

Excerpt:

This life-size deepfake is set up to have interactive discussions with visitors.

The deepfake can produce 45 minutes of content and 190,512 possible combinations of phrases and decisions taken by the fake but realistic Dalí. The exhibition was created by Goodby, Silverstein & Partners using 6,000 frames of Dalí taken from historic footage and 1,000 hours of machine learning.

 

From DSC:
On one hand: incredible work! Fantastic job! On the other hand, if this type of deepfake can be done, how can any video be trusted from here on out? What technology/app will be able to confirm that a video is actually that person, actually saying those words?

Will we get to the point where a video declares, “This is so-and-so, and I approved this video”? Or will we have an electronic signature? Will a blockchain-based technology be used? I don’t know…there always seem to be pros and cons to any given technology. It’s how we use it. It can be a dream, or it can be a nightmare.
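One direction an “electronic signature” for video could take is a signed digest of the file’s bytes: any tampering changes the digest and breaks verification. The sketch below uses a shared-secret HMAC purely to stay self-contained; a real provenance system would use public-key signatures so anyone can verify without holding the secret. All names and the key here are hypothetical.

```python
import hashlib
import hmac

# Sketch only: video provenance via a signed digest. A real system
# would use asymmetric (public-key) signatures; HMAC with a shared
# key keeps this example self-contained and runnable.

SECRET_KEY = b"creator-private-key"  # hypothetical key material

def sign_video(video_bytes: bytes) -> str:
    """Return a hex signature over the video's SHA-256 digest."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    """True only if the bytes are unmodified since signing."""
    return hmac.compare_digest(sign_video(video_bytes), signature)

original = b"...raw video frames..."
sig = sign_video(original)
print(verify_video(original, sig))                # True
print(verify_video(original + b"tamper", sig))    # False
```

Note the limits: this proves the file hasn’t changed since signing, not that the person depicted ever said those words — which is exactly the gap deepfakes exploit.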

 

 

Microsoft debuts Ideas in Word, a grammar and style suggestions tool powered by AI — from venturebeat.com by Kyle Wiggers; with thanks to Mr. Jack Du Mez for his posting on this over on LinkedIn

Excerpt:

The first day of Microsoft’s Build developer conference is typically chock-full of news, and this year was no exception. During a keynote headlined by CEO Satya Nadella, the Seattle company took the wraps off a slew of updates to Microsoft 365, its lineup of productivity-focused, cloud-hosted software and subscription services. Among the highlights were a new AI-powered grammar and style checker in Word Online, dubbed Ideas in Word, and dynamic email messages in Outlook Mobile.

Ideas in Word builds on Editor, an AI-powered proofreader for Office 365 that was announced in July 2016 and replaced the Spelling & Grammar pane in Office 2016 later that year. Ideas in Word similarly taps natural language processing and machine learning to deliver intelligent, contextually aware suggestions that could improve a document’s readability. For instance, it’ll recommend ways to make phrases more concise, clear, and inclusive, and when it comes across a particularly tricky snippet, it’ll put forward synonyms and alternative phrasings.
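As a toy analogue of the conciseness suggestions described above (this is not Microsoft’s model — a hand-written phrase table stands in for its machine-learned suggestions):

```python
import re

# Toy conciseness checker: a hand-written phrase table stands in
# for the machine-learned rewrite suggestions described above.

WORDY_TO_CONCISE = {
    r"\bin order to\b": "to",
    r"\bdue to the fact that\b": "because",
    r"\bat this point in time\b": "now",
}

def suggest_concise(sentence: str) -> str:
    """Apply simple wordiness rewrites to a sentence."""
    for pattern, replacement in WORDY_TO_CONCISE.items():
        sentence = re.sub(pattern, replacement, sentence, flags=re.IGNORECASE)
    return sentence

print(suggest_concise("We met in order to plan, due to the fact that time was short."))
# We met to plan, because time was short.
```

The real value of an ML-based approach is handling phrasings no table anticipates — the same generalization gap noted in the Alexa item earlier in this post.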

 

Also see:

 

 
© 2024 | Daniel Christian