5 emerging tech trends impacting the enterprise — from campustechnology.com by Rhea Kelly

Excerpts:

Gartner’s Emerging Technologies Hype Cycle focuses specifically on new technologies (not previously highlighted in past Hype Cycles) that “show promise in delivering a high degree of competitive advantage over the next five to 10 years.” The five most impactful trends to watch this year are:

  1. Sensing and mobility.
  2. Augmented human.
  3. Postclassical compute and comms.
  4. Digital ecosystems.
  5. Advanced AI and analytics.
 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 

Technology as Part of the Culture for Legal Professionals: A Q&A with Daniel Christian — from campustechnology.com by Mary Grush and Daniel Christian

Excerpt (emphasis DSC):

Mary Grush: Why should new technologies be part of a legal education?

Daniel Christian: I think it’s a critical point because our society, at least in the United States — and many other countries as well — is being faced with a dramatic influx of emerging technologies. Whether we are talking about artificial intelligence, blockchain, Bitcoin, chatbots, facial recognition, natural language processing, big data, the Internet of Things, advanced robotics — any of dozens of new technologies — this is the environment that we are increasingly living in, and being impacted by, day to day.

It is so important for our nation that legal professionals — lawyers, judges, attorneys general, state representatives, and legislators among them — be up to speed as much as possible on the technologies that surround us: What are the issues their clients and constituents face? It’s important that legal professionals regularly pulse-check the relevant landscapes to be sure that they are aware of the technologies that are coming down the pike. To help facilitate this habit, technology should be part of the culture for those who choose a career in law. (And what better time to help people start to build that habit than within the law schools of our nation?)

 

There is a real need for the legal realm to catch up with some of these emerging technologies, because right now, there aren’t many options for people to pursue. If the lawyers, and the legislators, and the judges don’t get up to speed, the “wild wests” out there will continue until they do.

 


 

Gartner: Top Wireless Tech Trends to Watch — from campustechnology.com by Rhea Kelly

Excerpts:

The research firm identified 10 key wireless trends worth watching as the technology continues to develop over the next five years:

  • Vehicle-to-everything (V2X) wireless. This is the technology that will allow conventional cars, self-driving cars and the road infrastructure to all share information and status data.
  • Wireless sensing. This involves using the absorption and reflection of wireless signals as sensor data for radar tracking purposes. As an example, Gartner pointed to wireless sensing as an indoor radar system for robots and drones.
 

Is this the future of (low-cost) healthcare? — from computerworld.com by Johnny Evans
A Zipnostic pilot program in New York hints at how Apple tech could transform healthcare.

Excerpt:

The thing is, the home visit isn’t made by a doctor but by an onsite “care coordinator” equipped with a full set of professional testing equipment and direct video contact with the doctor.

The coordinator runs through tests using a high-resolution camera, ultrasound, EKG, glucometer, blood pressure monitor, oximeter, and other state-of-the-art equipment, all of which is controlled using Zipnostic’s own apps.

Test data is made available to the real doctor at the end of the camera, who can take control of the testing procedure and provide an on-the-spot medical diagnosis based on real data.

The idea is that a diagnosis can be provided at around a fifteenth of the cost of a visit to the ER, and that the data driving the diagnosis can be much more accurate than you get from, say, a video chat using an app.

 

 

From DSC:
I just ran across this recently…what do you think of it?!

 

 

From DSC:
For me, this is extremely disturbing. And if I were a betting man, I’d wager that numerous nations/governments around the world — most certainly that includes the U.S. — have been developing new weapons of warfare for years that are based on artificial intelligence, robotics, automation, etc.

The question is, now what do we do?

Some very hard questions that numerous engineers and programmers need to be asking themselves these days…

By the way, the background audio on the clip above should either be non-existent or far more ominous — this stuff is NOT a joke.

Also see this recent posting. >>

 

Addendum on 6/26/19:

 

Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a “human in the loop” — that is, with no person involved at any point between identifying a target and killing them. And as facial recognition and decision-making algorithms become more powerful, it will only get easier.

 

 

Research Posters Are a Staple of Academic Conferences. Could a New Design Speed Discovery? — from edsurge.com by Jeff Young

Excerpts:

Scholars around the world share their latest research findings with a decidedly low-tech ritual: printing a 48-inch by 36-inch poster densely packed with charts, graphs and blocks of text describing their research hypothesis, methods and findings. Then they stand with the poster in an exhibit hall for an hour, surrounded by rows of other researchers presenting similar posters, while hundreds of colleagues from around the world walk by trying to skim the displays.

Not only does the exercise deflate the morale of the scholars sharing posters, but the ritual is also incredibly inefficient at communicating science, Morrison argues.

Morrison says he has a solution: A better design for those posters, plus a dash of tech.

 

 

To make up for all the nuance and detail lost in this approach, the template includes a QR code that viewers can scan to get to the full research paper.
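For anyone curious about how small the technical lift is here, generating such a QR code takes only a couple of lines. Below is a minimal sketch using the open-source Python qrcode package; the paper URL is just a placeholder, not the template's actual link.

```python
# Minimal sketch: create a QR code that points to the full paper.
# Assumes the open-source "qrcode" package (pip install qrcode[pil]).
import qrcode

paper_url = "https://example.org/full-paper"  # placeholder URL, not a real paper

img = qrcode.make(paper_url)   # returns a PIL image of the QR code
img.save("poster_qr.png")      # drop this PNG into the poster layout
```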

 

From DSC:
Wouldn’t it be great if more journal articles did the same thing? That is, give us the key findings, conclusions (with some backbone to them), and recommendations right away! Abstracts don’t go far enough, and often scholars/specialists are talking amongst themselves…not to the world. They could have a far greater reach/impact with this kind of approach.

(The QR code doesn’t make as much sense if one is already reading the full journal article…but the other items make a great deal of sense!)

 

 

Survey: Students Choosing Online Programs Closer to Home — from campustechnology.com by Dian Schaffhauser

 

Mentioned in that article:

 

Also see:

“It’s encouraging to see that a majority of students who are studying fully online are reporting great value and satisfaction with their online programs which are largely tied to ambitious career goals,” said Todd Zipper, president and CEO of Learning House, in a prepared statement. “With an increasing population of savvier consumers with high expectations, institutions need to do better at offering more quality, diverse programs that are sensitive to cost in order to keep up with the growing demands of online college students.”

 

From DSC:
If, in the year 2019, most students say online learning is as good as or better than face-to-face learning, what will they say come 2025? 2035?

Many people will still prefer to have F2F-based learning experiences no matter what year it is. That said, as the innovation continues to occur mainly in the digital/online/virtual realms, F2F will likely find it harder and harder to compete. My advice to current faculty members? Get experience teaching online — and do so as soon as you possibly can.

 

 

 

Watch Salvador Dalí Return to Life Through AI — from interestingengineering.com
The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life.

Excerpt:

The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life. This life-size deepfake is set up to have interactive discussions with visitors.

The deepfake can produce 45 minutes of content and 190,512 possible combinations of phrases and decisions taken by the fake but realistic Dalí. The exhibition was created by Goodby, Silverstein & Partners using 6,000 frames of Dalí taken from historic footage and 1,000 hours of machine learning.

 

From DSC:
On one hand: incredible work! Fantastic job! On the other hand, if this type of deepfake can be made, how can any video be trusted from here on out? What technology or app will be able to confirm that a video really shows that person, actually saying those words?

Will we get to the point where a video ends with, “This is so-and-so, and I approved this video”? Or will we have an electronic signature? Will a blockchain-based technology be used? I don’t know…there always seem to be pros and cons to any given technology. It’s how we use it. It can be a dream, or it can be a nightmare.
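One way to make the “electronic signature” idea concrete: the person (or organization) behind a video could publish a cryptographic signature of the file, so that anyone holding the matching public key can later check that the footage hasn’t been altered. Here is a minimal sketch of that idea using Python’s cryptography package; the key handling and the placeholder video bytes are illustrative only, one possible approach rather than a description of any existing product.

```python
# Minimal sketch: sign a video's hash so viewers can verify it later.
# Assumes the "cryptography" package (pip install cryptography).
# The video bytes below are a placeholder; in practice you would read the file.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

video_bytes = b"...raw bytes of the published video would go here..."
digest = hashlib.sha256(video_bytes).digest()

# The speaker (or studio) holds the private key and publishes the signature.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(digest)

# Anyone with the public key can verify the footage hasn't been altered.
try:
    public_key.verify(signature, digest)
    print("Signature valid: the file matches what was signed.")
except InvalidSignature:
    print("Signature invalid: the file was altered or signed by a different key.")
```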

 

 
 

How MIT’s Mini Cheetah Can Help Accelerate Robotics Research — from spectrum.ieee.org by Evan Ackerman
Sangbae Kim talks to us about the new Mini Cheetah quadruped and his future plans for the robot

 

 

From DSC:
Sorry, but while the video/robot is incredible, a feeling in the pit of my stomach makes me reflect upon what’s likely happening along these lines in the militaries throughout the globe…I don’t mean to be a fear monger, but rather a realist.

 

 

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
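To see why that matters, here is a toy illustration of the feedback loop the researchers describe; every number below is invented. If “recorded incidents” mostly reflect where officers were sent to look, a model trained on those records will keep pointing back to the same neighborhoods.

```python
# Toy illustration of the "dirty data" feedback loop. All numbers are invented.
# Recorded incidents track patrol intensity, not the underlying crime rate.
true_crime_rate = {"district_a": 0.05, "district_b": 0.05}  # identical by design
patrol_hours    = {"district_a": 1000, "district_b": 100}   # heavily skewed

# Recorded incidents scale with how much time police spend looking.
recorded = {d: true_crime_rate[d] * patrol_hours[d] for d in true_crime_rate}
print(recorded)  # {'district_a': 50.0, 'district_b': 5.0}

# A naive "predictive" system ranks districts by recorded incidents...
hot_spot = max(recorded, key=recorded.get)
print(hot_spot)  # district_a

# ...so district_a receives even more patrols next cycle and the gap widens,
# even though both districts started with the same underlying crime rate.
```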

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.
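Purely as a thought experiment (I have no inside knowledge of AiServe’s implementation), the basic loop such a wearable runs is easy to picture: capture a camera frame, run a vision model over it, and turn anything in the walking path into a spoken cue through the earphones. A rough sketch follows; capture_frame, detect_obstacles, and speak are hypothetical stand-ins, not AiServe’s actual API.

```python
# Rough sketch of a perception-to-audio loop for a navigation wearable.
# capture_frame(), detect_obstacles(), and speak() are hypothetical stand-ins;
# a real system would use a camera driver, a trained vision model, and a
# text-to-speech engine, with location services layered on top.
import time

def capture_frame():
    """Stand-in for grabbing one frame from the wearable's camera."""
    return {"timestamp": time.time()}

def detect_obstacles(frame):
    """Stand-in for a vision model; returns (label, bearing_degrees, meters)."""
    return [("curb", -10, 1.5), ("parked car", 25, 3.0)]

def speak(text):
    """Stand-in for text-to-speech output through the earphones."""
    print(text)

for _ in range(3):  # a real device would loop continuously
    frame = capture_frame()
    for label, bearing, distance in detect_obstacles(frame):
        if distance < 2.0:  # only warn about nearby obstacles
            side = "left" if bearing < 0 else "right"
            speak(f"{label} ahead on your {side}, about {distance:.0f} meters away")
    time.sleep(1.0)
```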

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here: it will be really interesting to see what happens in the future as some of these emerging technologies are combined…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 

Also relevant/see:

 

 

India Just Swore in Its First Robot Police Officer — from futurism.com by Dan Robitzski
RoboCop, meet KP-Bot.

Excerpt:

RoboCop
India just swore in its first robotic police officer, which is named KP-Bot.

The animatronic-looking machine was granted the rank of sub-inspector on Tuesday, and it will operate the front desk of Thiruvananthapuram police headquarters, according to India Today.

 

 

From DSC:
Whoa….hmmm…note to the ABA and to the legal education field — and actually to anyone involved in developing laws — we need to catch up. Quickly.

My thoughts go to the governments and to the militaries around the globe. Are we now on a slippery slope? How far along are the militaries of the world in integrating robotics and AI into their weapons of war? Quite far, I think.

Also, at the higher education level, are Computer Science and Engineering departments taking their responsibilities seriously in this regard? What kind of teaching is being done (or not being done) about the moral responsibilities that come with the code their students write? The robots they build?

 

 

 

LinkedIn 2019 Talent Trends: Soft Skills, Transparency and Trust — from linkedin.com by Josh Bersin

Excerpts:

This week LinkedIn released its 2019 Global Talent Trends research, a study that summarizes job and hiring data across millions of people, and the results are quite interesting. (5,165 talent professionals and managers responded, a big sample.)

In an era when automation, AI, and technology have become more pervasive, important (and frightening) than ever, the big issue companies face is about people: how we find and develop soft skills, how we create fairness and transparency, and how we make the workplace more flexible, humane, and honest.

The most interesting part of this research is a simple fact: in today’s world of software engineering and ever-more technology, it’s soft skills that employers want. 91% of companies cited this as an issue and 80% of companies are struggling to find better soft skills in the market.

What is a “soft skill?” The term goes back twenty years, when we had “hard skills” (engineering and science), so we threw everything else into the category of “soft.” In reality, soft skills are all the human skills we have in teamwork, leadership, collaboration, communication, creativity, and person-to-person service. It’s easy to “teach” hard skills, but soft skills must be “learned.”

 

 

Also see:

Employers Want ‘Uniquely Human Skills’ — from campustechnology.com by Dian Schaffhauser

Excerpt:

According to 502 hiring managers and 150 HR decision-makers, the top skills they’re hunting for among new hires are:

  • The ability to listen (74 percent);
  • Attention to detail and attentiveness (70 percent);
  • Effective communication (69 percent);
  • Critical thinking (67 percent);
  • Strong interpersonal abilities (65 percent); and
  • Being able to keep learning (65 percent).
 

Training the workforce of the future: Education in America will need to adapt to prepare students for the next generation of jobs – including ‘data trash engineer’ and ‘head of machine personality design’ — from dailymail.co.uk by Valerie Bauman

Excerpts:

  • Careers that used to safely dodge the high-tech bullet will soon require at least a basic grasp of things like web design, computer programming and robotics – presenting a new challenge for colleges and universities
  • A projected 85 percent of the jobs that today’s college students will have in 2030 haven’t been invented yet
  • The coming high-tech changes are expected to touch a wider variety of career paths than ever before
  • Many experts say American universities aren’t ready for the change because the high-tech skills most workers will need are currently focused just on people specializing in science, technology, engineering and math


 

 