A prediction for blockchain transformation in higher education  — from blockchain.capitalmarketsciooutlook.com by Michael Mathews

Excerpt:

Ironically, blockchain entered the scene in a very neutral way, while Bitcoin created all the noise, simply because it used an aspect of blockchain. Bitcoin, cyber coins, and/or token concepts will come and go, just as the various forms of web browsers did. However, just as the Internet lives on, so will blockchain. In fact, blockchain may very well become the best of the Internet and IoT merged with the trust factor of the ISBN/MARC code concept. As history unveils itself, blockchain will stand the test of time and become a form of a future generation of the Internet (i.e., Internet 4.0) without the need for cybersecurity.

With a positive prediction for blockchain's future, coupled with lessons learned from the Internet, blockchain will become the single largest influencer on education. I have only gone on record predicting two shifts in technology over a 5-10 year period of time, and both have now come to pass. This is my third prediction, and the one with the greatest potential for transformation.

 

 

What I did not know until last year was that a neutral technology called blockchain would show up in the history of the world, and that at the same time Amazon would start designing blockchain templates to streamline processes, allowing educational decisions to become as easy as ordering and receiving Amazon products.

 

 

The Future Today Institute’s 12th Annual Emerging Tech Trends Report — from futuretodayinstitute.com

Excerpts:

At the Future Today Institute, we identify emerging tech trends and map the future for our clients. This is FTI’s 12th annual Tech Trends Report, and in it we identify 315 tantalizing advancements in emerging technologies — artificial intelligence, biotech, autonomous robots, green energy and space travel — that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. As of the publication date, the annual FTI Tech Trends Report has garnered more than 7.5 million cumulative views.

Key findings for 2019 (emphasis DSC)

  • Privacy is dead. (DSC: NOT GOOD!!! If this is true, can the situation be reversed?)
  • Voice Search Optimization (VSO) is the new SEO.
  • The Big Nine.
  • Personal data records are coming. (DSC: Including cloud-based learner profiles, I hope.)
  • China continues to ascend, and not just in artificial intelligence.
  • Lawmakers around the world are not prepared to deal with new challenges that arise from emerging science and technology.
  • Consolidation continues as a key theme for 2019.

 

 

Law schools escalate their focus on digital skills — from edtechmagazine.com by Eli Zimmerman
Coding, data analytics and device integration give students the tools to become more efficient lawyers.

Excerpt:

Participants learned to use analytics programs and artificial intelligence to complete work in a fraction of the time it usually takes.

For example, students analyzed contracts using AI programs to find errors and areas for improvement across various legal jurisdictions. In another exercise, students learned to use data programs to draft nondisclosure agreements in less than half an hour.

By learning analytics models, students will graduate with skills that make them more effective — and more employable — professionals.

“As advancing technology and massive data sets enable lawyers to answer complex legal questions with greater speed and efficiency, courses like Legal Analytics will help KU Law students be better advocates for tomorrow’s clients and more competitive for tomorrow’s jobs,” Stephen Mazza, dean of the University of Kansas School of Law, tells Legaltech News.

 

Reflecting that shift, the Law School Admission Council, which organizes and distributes the Law School Admission Test, will be offering the test exclusively on Microsoft Surface Go tablets starting in July 2019.

 

From DSC:
I appreciate the article, thanks Eli. From one of the articles that was linked to, it appears that, “To facilitate the transition to the Digital LSAT starting July 2019, LSAC is procuring thousands of Surface Go tablets that will be loaded with custom software and locked down to ensure the integrity of the exam process and security of the test results.”

 

 

 

 

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

 

 

 

Microsoft built a chat bot to match patients to clinical trials — from fortune.com by Dina Bass

Excerpt:

A chat bot that began as a hackathon project at Microsoft’s lab in Israel makes it easier for sick patients to find clinical trials that could provide otherwise unavailable medicines and therapies.

The Clinical Trials Bot lets patients and doctors search for studies related to a disease and then answer a succession of text questions. The bot then suggests links to trials that best match the patients’ needs. Drugmakers can also use it to find test subjects.

 

Half of all clinical trials for new drugs and therapies never reach the number of patients needed to start, and many others are delayed for the same reason, Bitran said. Meanwhile patients, sometimes desperately sick, find it hard to comb through the roughly 50,000 trials worldwide and their arcane and lengthy criteria—typically 20 to 30 factors. Even doctors struggle to search quickly on behalf of patients, Bitran said.

 

 

 

Joint CS and Philosophy Initiative, Embedded EthiCS, Triples in Size to 12 Courses — from thecrimson.com by Ruth Hailu and Amy Jia

Excerpt:

The idea behind the Embedded EthiCS initiative arose three years ago after students in Grosz’s course, CS 108: “Intelligent Systems: Design and Ethical Challenges,” pushed for an increased emphasis on ethical reasoning within discussions surrounding technology, according to Grosz and Simmons. One student suggested Grosz reach out to Simmons, who also recognized the importance of an interdisciplinary approach to computer science.

“Not only are today’s students going to be designing technology in the future, but some of them are going to go into government and be working on regulation,” Simmons said. “They need to understand how [ethical issues] crop up, and they need to be able to identify them.”

 

 

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
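The feedback loop Crawford describes can be sketched in a few lines of Python. This is a deliberately simplified toy model, not anything from the AI Now study: the district names, arrest counts, and rates below are all invented for illustration. It shows how a naive "predictive" system that allocates patrols based on historical arrest data simply perpetuates whatever skew was already in that data.

```python
# Hypothetical sketch of the "dirty data" feedback loop: a naive predictive
# model allocates patrols wherever past (possibly corrupt) enforcement was
# concentrated. All districts and numbers are invented for illustration.

# Historical arrest counts -- already skewed by over-policing of District A,
# not by any difference in the true underlying crime rate.
historical_arrests = {"District A": 900, "District B": 300, "District C": 300}

def allocate_patrols(arrest_counts, total_patrols=100):
    """Allocate patrols proportionally to past arrests (a naive 'prediction')."""
    total = sum(arrest_counts.values())
    return {d: round(total_patrols * n / total) for d, n in arrest_counts.items()}

def simulate_year(arrest_counts, true_crime_rate=3.0):
    """More patrols produce more recorded arrests, regardless of true crime."""
    patrols = allocate_patrols(arrest_counts)
    # The true crime rate is identical everywhere, so recorded arrests
    # depend only on patrol presence -- the historical skew compounds.
    return {d: int(p * true_crime_rate) for d, p in patrols.items()}

counts = historical_arrests
for year in range(3):
    counts = simulate_year(counts)

# District A's share of patrols never shrinks, even though the true crime
# rate was the same in every district: dirty data in, dirty predictions out.
patrols = allocate_patrols(counts)
print(patrols)  # District A keeps 60 of 100 patrols every year
```

The point of the sketch is that the model is "working" exactly as designed; the discriminatory pattern lives entirely in the training data it was fed.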

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I'm left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens in the future as some of these emerging technologies are combined…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impacts the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 


 

 

India Just Swore in Its First Robot Police Officer — from futurism.com by Dan Robitzski
RoboCop, meet KP-Bot.

Excerpt:

RoboCop
India just swore in its first robotic police officer, which is named KP-Bot.

The animatronic-looking machine was granted the rank of sub-inspector on Tuesday, and it will operate the front desk of Thiruvananthapuram police headquarters, according to India Today.

 

 

From DSC:
Whoa….hmmm…note to the ABA and to the legal education field — and actually to anyone involved in developing laws — we need to catch up. Quickly.

My thoughts go to the governments and to the militaries around the globe. Are we now on a slippery slope? How far along are the militaries of the world in integrating robotics and AI into their weapons of war? Quite far, I think.

Also, at the higher education level, are Computer Science and Engineering Departments taking their responsibilities seriously in this regard? What kind of teaching is being done (or not done) in terms of the moral responsibilities of their code? Their robots?

 

 

 

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are ingrained within the Silicon Valley tech and innovation scene.

 

 

AR will spark the next big tech platform — call it Mirrorworld — from wired.com by Kevin Kelly

Excerpt:

It is already under construction. Deep in the research labs of tech companies around the world, scientists and engineers are racing to construct virtual places that overlay actual places. Crucially, these emerging digital landscapes will feel real; they’ll exhibit what landscape architects call placeness. The Street View images in Google Maps are just facades, flat images hinged together. But in the mirrorworld, a virtual building will have volume, a virtual chair will exhibit chairness, and a virtual street will have layers of textures, gaps, and intrusions that all convey a sense of “street.”

The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.

 

Also see:
Google Maps in augmented reality points you in the right direction — from mashable.com by Sasha Lekach

 

 

Bobst launches augmented reality helpline — from proprint.com.au by Sheree Young

Excerpt:

Swiss packaging and label equipment supplier Bobst has launched a new augmented reality smart headset to help answer customer questions.

Rapid problem solving thanks to a new augmented reality helpline service introduced by Swiss packaging and label equipment supplier Bobst stands to save printers time and money, the company says.

The Helpline Plus AR innovation provides a remote assistance service to Bobst’s customers using a smart headset with augmented reality glasses. The technology is being gradually rolled out globally, Bobst says.

Customers can use the headset to contact technical experts and iron out any issues they may be having as well as receive real time advice and support.

 

 

 


© 2019 | Daniel Christian