How MIT’s Mini Cheetah Can Help Accelerate Robotics Research — from spectrum.ieee.org by Evan Ackerman
Sangbae Kim talks to us about the new Mini Cheetah quadruped and his future plans for the robot


From DSC:
Sorry, but while the video/robot is incredible, a feeling in the pit of my stomach makes me reflect on what's likely happening along these lines in militaries around the globe. I don't mean to be a fearmonger, but rather a realist.


Jennifer Gonzalez on the Aerodynamics of Exceptional Schools | SXSW EDU


7 Things You Should Know About Accessibility Policy — from library.educause.edu

Excerpt:

Websites from the Accessible Technology Initiative (ATI) of the California State University, Penn State, the University of Virginia, and the Web Accessibility Initiative feature rich content related to IT accessibility policies. A California State University memorandum outlines specific responsibilities and reporting guidelines in support of CSU’s Policy on Disability Support and Accommodations. Cornell University developed a multiyear “Disability Access Management Strategic Plan.” Specific examples of accessibility policies focused on electronic communication and information technology can be found at Penn State, Purdue University, Yale University, and the University of Wisconsin–Madison. Having entered into a voluntary agreement with the National Federation of the Blind to improve accessibility, Wichita State University offers substantial accessibility-related resources for its community, including specific standards for ensuring accessibility in face-to-face instruction.


Isaiah 58:6-11 New International Version (NIV) — from biblegateway.com

“Is not this the kind of fasting I have chosen:
to loose the chains of injustice
    and untie the cords of the yoke,
to set the oppressed free
    and break every yoke?
Is it not to share your food with the hungry
    and to provide the poor wanderer with shelter—
when you see the naked, to clothe them,
    and not to turn away from your own flesh and blood?
Then your light will break forth like the dawn,
    and your healing will quickly appear;
then your righteousness will go before you,
    and the glory of the Lord will be your rear guard.
Then you will call, and the Lord will answer;
    you will cry for help, and he will say: Here am I.

“If you do away with the yoke of oppression,
    with the pointing finger and malicious talk,
and if you spend yourselves in behalf of the hungry
    and satisfy the needs of the oppressed,
then your light will rise in the darkness,
    and your night will become like the noonday.
The Lord will guide you always;
    he will satisfy your needs in a sun-scorched land
    and will strengthen your frame.
You will be like a well-watered garden,
    like a spring whose waters never fail.


The Future of Leadership Development — from hbr.org by Mihnea Moldoveanu and Das Narayandas

Excerpt:

The need for leadership development has never been more urgent. Companies of all sorts realize that to survive in today’s volatile, uncertain, complex, and ambiguous environment, they need leadership skills and organizational capabilities different from those that helped them succeed in the past. There is also a growing recognition that leadership development should not be restricted to the few who are in or close to the C-suite. With the proliferation of collaborative problem-solving platforms and digital “adhocracies” that emphasize individual initiative, employees across the board are increasingly expected to make consequential decisions that align with corporate strategy and culture. It’s important, therefore, that they be equipped with the relevant technical, relational, and communication skills.

The good news is that the growing assortment of online courses, social and interactive platforms, and learning tools from both traditional institutions and upstarts—which make up what we call the “personal learning cloud” (PLC)—offers a solution. Organizations can select components from the PLC and tailor them to the needs and behaviors of individuals and teams. The PLC is flexible and immediately accessible, and it enables employees to pick up skills in the context in which they must be used. In effect, it’s a 21st-century form of on-the-job learning.

In this article we describe the evolution of leadership development, the dynamics behind the changes, and ways to manage the emerging PLC for the good of both the firm and the individual.


From DSC:
What they describe as a “personal learning cloud (PLC),” I call pieces of a Learning Ecosystem:

  • “…the growing assortment of online courses
  • social and interactive platforms
  • learning tools from both traditional institutions and upstarts”


Law firms either keep up with tech or get left behind — from abajournal.com by Gabriel Teninbaum

Excerpts:

I spend a lot of time thinking about a version of that classic interview question where applicants are asked to envision their future. But, instead of thinking about my own future, I think of the legal profession’s future. If you haven’t done it, give it a try: What will legal work look like in 15 years?

There is reason to think it'll look very different than it does now. E-discovery software now does the work once handled by new associates. Legal process outsourcing (LPO) companies have pulled due diligence work, and much more, to offshore locations (and away from domestic midsize firms). LegalZoom—now valued at $2 billion—is drawing millions of consumers every year who choose to handle legal matters without local attorneys.

If your vision includes the idea that the biggest legal employers may someday not even be law firms, then you’re correct. It’s already happened: The largest private provider of legal services in the world today is no longer a multinational law firm. It’s Deloitte, the Big Four accounting firm. Looming super-technologies—like AI and blockchain—are somewhere on the horizon, with the potential to upend legal work in ways that some believe will be unprecedented.


Also see:

Students create immersive videos to enhance criminal justice courses — from news.psu.edu by Emma Gosalvez

Excerpt:

Immersive technologies such as 360-degree videos could revolutionize the future of forensic science, giving police and criminologists a tool to visualize different crime scenes and ultimately, become better investigators. Through a Berks Teaching & Learning Innovation Partnership Grant, Penn State Berks students in the course CRIMJ 210: Policing in America are learning to create 360-degree videos of crime-scene scenarios.

These videos are viewed by their peers in CRIMJ 100: Introduction to Criminal Justice to learn about topics such as self-defense, defense of others, and defense of property.

“The project transforms student learning on two levels: It allows students to engage in creative collaboration related to a course topic, and students get to ‘experience’ the scenarios presented by the 360-degree videos created by their peers,” said Mary Ann Mengel, an instructional multimedia designer for Penn State Berks’ Center for Learning & Teaching.


The information below is from Deb Molfetta, Outreach Coordinator at EdDPrograms.org


EdDPrograms.org helps educators and administrators research doctoral education opportunities. Their organization’s work in education began in 2008 with projects ranging from a new teacher survival guide to their own teacher education scholarship program. More recently they realized that there weren’t any websites dedicated to professional development through Doctor of Education (EdD) programs, which is why they created their own – EdDPrograms.org. It covers a lot of ground, but here are a few sections they think administrators will appreciate:

EdDPrograms.org is owned and operated by a group that has been creating post-secondary education resources since 2008. According to Deb, they have a history of providing students with objective, fact-based resources.


135 Million Reasons To Believe In A Blockchain Miracle — from forbes.com by Mike Maddock

Excerpts:

Which brings us to the latest headlines about a cryptocurrency entrepreneur’s passing—taking with him the passcode to unlock C$180 million (about $135 million U.S.) in investor currency—which is now reportedly gone forever. Why? Because apparently, the promise of blockchain is true: It cannot be hacked. It is absolutely trustworthy.

Gerald Cotten, the CEO of a crypto company, reportedly passed away recently while building an orphanage in India. Unfortunately, he was the only person who knew the passcode to access the millions his investors had entrusted to him.
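The technical reason those funds are likely gone for good is that a wallet's keys are protected by material derived from a passcode via a one-way function; lose the passcode and there is no back door. Below is a minimal Python sketch of that one-way property. The passcode, the XOR "cipher," and the bare SHA-256 derivation are illustrative stand-ins only, not how any production wallet actually works:

```python
import hashlib

def derive_key(passcode: str) -> bytes:
    # Toy key derivation; a real wallet would use a hardened KDF (scrypt, PBKDF2),
    # but the one-way property illustrated here is the same.
    return hashlib.sha256(passcode.encode("utf-8")).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only -- NOT real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"wallet master seed"
ciphertext = xor_cipher(secret, derive_key("correct horse battery staple"))

# With the exact passcode, decryption round-trips:
assert xor_cipher(ciphertext, derive_key("correct horse battery staple")) == secret

# With any other guess you get garbage, and because the hash is one-way there
# is no route from the ciphertext back to the key without the passcode.
assert xor_cipher(ciphertext, derive_key("correct horse battery stable")) != secret
```

The only practical attack is guessing passcodes, which is exactly why a strong passphrase known to a single person can lock away funds forever.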

This is how we get the transition to Web 3.0.

Some questions to consider:

  • Who will build an easy-to-use “wallet” of the future?
  • Are we responsible enough to handle that much power?

Perhaps the most important question of all is: What role do our “trusted” experts play in this future?



From DSC:
I’d like to add another question to Mike’s article:

  • How should law schools, law firms, legislative bodies, government, etc. deal with the new, exponential pace of change and with the power of emerging technologies?


Christensen Institute: Now’s the time for a makeover in college accreditation — from campustechnology.com by Dian Schaffhauser

Excerpt:

As authors Alana Dunagan and Michael Horn noted, “It is impossible to know what innovative models can or will be accredited, as similar initiatives are treated differently by different accreditors — even by the same accreditor at different points in time. Institutions that are able to innovate are those blessed by geography — a cooperative, forward-thinking regional accreditor — as well as finances.” Others believe they can’t afford to innovate or are in the position of facing an accreditation process that dings them when experimental programs fail and are shut down.


LinkedIn 2019 Talent Trends: Soft Skills, Transparency and Trust — from linkedin.com by Josh Bersin

Excerpts:

This week LinkedIn released its 2019 Global Talent Trends research, a study that summarizes job and hiring data across millions of people, and the results are quite interesting. (5,165 talent professionals and hiring managers responded, a sizable sample.)

In an era when automation, AI, and technology have become more pervasive, more important (and more frightening) than ever, the big issue companies face is about people: how we find and develop soft skills, how we create fairness and transparency, and how we make the workplace more flexible, humane, and honest.

The most interesting part of this research is a simple fact: in today’s world of software engineering and ever-more technology, it’s soft skills that employers want. 91% of companies cited this as an issue and 80% of companies are struggling to find better soft skills in the market.

What is a “soft skill?” The term goes back twenty years, when we had “hard skills” (engineering and science), so we threw everything else into the category of “soft.” In reality, soft skills are all the human skills we have in teamwork, leadership, collaboration, communication, creativity, and person-to-person service. It’s easy to “teach” hard skills, but soft skills must be “learned.”


Also see:

Employers Want ‘Uniquely Human Skills’ — from campustechnology.com by Dian Schaffhauser

Excerpt:

According to 502 hiring managers and 150 HR decision-makers, the top skills they’re hunting for among new hires are:

  • The ability to listen (74 percent);
  • Attention to detail and attentiveness (70 percent);
  • Effective communication (69 percent);
  • Critical thinking (67 percent);
  • Strong interpersonal abilities (65 percent); and
  • Being able to keep learning (65 percent).

Provosts count more on online programs — from insidehighered.com by Doug Lederman
More say they will increase emphasis on and allocate “major funds” to online offerings. Survey also finds solid but not spectacular support for open educational resources, and that backing for competency-based programs is more philosophical than practical.

Excerpt:

Increasing numbers of college and university chief academic officers plan to expand their online offerings and make major allocations of funds to online programs, a new survey by Inside Higher Ed shows.

The 2019 Survey of College and University Chief Academic Officers, published today by Inside Higher Ed in conjunction with Gallup, finds that 83 percent of provosts say they will increase their emphasis on expanding online programs and offerings. That figure has edged up slightly in recent years, from 79 percent in 2016.

A more significant rise has occurred in the proportion of academic officers anticipating a “major allocation of funds” to online programs. The survey asks provosts to assess the likelihood of increased funds to several categories of programs, including professional programs, STEM fields and the arts and sciences. As seen in the chart below, the share of provosts agreeing or strongly agreeing they would allocate major funds to online programs has grown to 56 percent this year from 46 percent four years earlier.


Top six AI and automation trends for 2019 — from forbes.com by Daniel Newman

Excerpt:

If your company hasn’t yet created a plan for AI and automation throughout your enterprise, you have some work to do. Experts believe AI will add nearly $16 trillion to the global economy by 2030, and 20% of companies surveyed are already planning to incorporate AI throughout their companies next year. As 2018 winds down, now is the time to take a look at some trends and predictions for AI and automation that I believe will dominate the headlines in 2019—and to think about how you may incorporate them into your own company.


Also see — and an insert here from DSC:

Kai-Fu Lee has a rosier picture than I do regarding how humanity will be impacted by AI. One need only check out today's news to see that humans have a very hard time creating unity, thinking about why businesses exist in the first place, and being kind to one another…


How AI can save our humanity 


Big tech may look troubled, but it’s just getting started — from nytimes.com by David Streitfeld

Excerpt:

SAN JOSE, Calif. — Silicon Valley ended 2018 somewhere it had never been: embattled.

Lawmakers across the political spectrum say Big Tech, for so long the exalted embodiment of American genius, has too much power. Once seen as a force for making our lives better and our brains smarter, tech is now accused of inflaming, radicalizing, dumbing down and squeezing the masses. Tech company stocks have been pummeled from their highs. Regulation looms. Even tech executives are calling for it.

The expansion underlines the dizzying truth of Big Tech: It is barely getting started.


“For all intents and purposes, we’re only 35 years into a 75- or 80-year process of moving from analog to digital,” said Tim Bajarin, a longtime tech consultant to companies including Apple, IBM and Microsoft. “The image of Silicon Valley as Nirvana has certainly taken a hit, but the reality is that we the consumers are constantly voting for them.”


Big Tech needs to be regulated, many are beginning to argue, and yet there are worries about giving that power to the government.

Which leaves regulation up to the companies themselves, always a dubious proposition.

 
 

Facial recognition has to be regulated to protect the public, says AI report — from technologyreview.com by Will Knight
The research institute AI Now has identified facial recognition as a key challenge for society and policymakers—but is it too late?

Excerpt (emphasis DSC):

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.


Also see:

EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central
problem and addresses the following key issues:

  1. The growing accountability gap in AI, which favors those who create and deploy these
    technologies at the expense of those most affected
  2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial
    and affect recognition, increasing the potential for centralized control and oppression
  3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
  4. Unregulated and unmonitored forms of AI experimentation on human populations
  5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.


From DSC:
As I said in this posting, we need to be aware of the emerging technologies around us. Just because we can doesn't mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that's beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., an expensive, powerful thing — to youth who lack the life experience to handle such power and, often, the proper respect for it. Many of these youthful members of our society don't own responsibility for the positive and negative impacts that such powerful technologies can have (and the more senior execs haven't taken enough responsibility either)!

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-to-25-year-old? Yet that's what America has been doing for years. And, in some areas, we're now paying the price.


If you owned this $137,000+ car, would you turn the keys of it over to your 16-25 year old?!


The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that's also relevant/involved here.) Then we, as a society, sit back and wonder how we got to this place.

Even technologists and programmers in their 20s and 30s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don't mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you'll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — a perspective that is often completely incorrect. Check out LinkedIn for posts about age discrimination…it's a very real thing. And many of us over the age of 30 know this to be true if we've lost a job in the last decade or two and have tried to get a job that involves technology.


Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.
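Under the hood, systems like this typically reduce each face to a numeric "embedding" vector and declare a match whenever a similarity score clears a threshold (press reports noted the ACLU ran Rekognition at its default 80% confidence setting). Here is a toy Python sketch of that matching step; the four-dimensional vectors, the suspect names, and the use of cosine similarity are all assumptions made purely for illustration, since real systems use high-dimensional learned embeddings:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 = identical direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional "face embeddings" (made-up numbers; real systems
# use embeddings with 128+ dimensions).
portrait = [0.9, 0.1, 0.3, 0.4]
mugshots = {
    "suspect_a": [0.88, 0.12, 0.33, 0.41],  # nearby vector: a different person can land here
    "suspect_b": [0.10, 0.90, 0.20, 0.70],  # clearly dissimilar vector
}

THRESHOLD = 0.80  # a looser threshold yields more "matches" -- and more mistakes

matches = {name: cosine_similarity(portrait, vec) >= THRESHOLD
           for name, vec in mugshots.items()}
# suspect_a clears the threshold even if the embedding belongs to someone else;
# that is exactly the false-positive failure mode the ACLU test surfaced.
```

The policy question follows directly from the math: whoever picks the threshold picks the trade-off between missed matches and false accusations.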

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.


just because we can does not mean we should


Just because we can…


5 influencers predict AI’s impact on business in 2019 — from martechadvisor.com by Christine Crandell

Excerpt:

With Artificial Intelligence (AI) already proving its worth to adopters, it’s not surprising that an increasing number of companies will implement and leverage AI in 2019. Now, it’s no longer a question of whether AI will take off. Instead, it’s a question of which companies will keep up. Here are five predictions from five influencers on the impact AI will have on businesses in 2019, writes Christine Crandell, President, New Business Strategies.


Should we be worried about computerized facial recognition? — from newyorker.com by David Owen
The technology could revolutionize policing, medicine, even agriculture—but its applications can easily be weaponized.


Facial-recognition technology is advancing faster than the people who worry about it have been able to think of ways to manage it. Indeed, in any number of fields the gap between what scientists are up to and what nonscientists understand about it is almost certainly greater now than it has been at any time since the Manhattan Project. 


From DSC:
This is why law schools, legislatures, and the federal government need to become much more responsive to emerging technologies. The pace of technological change has changed. But have other important institutions of our society adapted to this new pace of change?


Andrew Ng sees an eternal springtime for AI — from zdnet.com by Tiernan Ray
Former Google Brain leader and Baidu chief scientist Andrew Ng lays out the steps companies should take to succeed with artificial intelligence, and explains why there’s unlikely to be another “AI winter” like in times past.


Google Lens now recognizes over 1 billion products — from venturebeat.com by Kyle Wiggers with thanks to Marie Conway for her tweet on this

Excerpt:

Google Lens, Google’s AI-powered analysis tool, can now recognize over 1 billion products from Google’s retail and price comparison portal, Google Shopping. That’s four times the number of objects Lens covered in October 2017, when it made its debut.

Aparna Chennapragada, vice president of Google Lens and augmented reality at Google, revealed the tidbit in a retrospective blog post about Google Lens’ milestones.


Amazon Customer Receives 1,700 Audio Files Of A Stranger Who Used Alexa — from npr.org by Sasha Ingber

Excerpt:

When an Amazon customer in Germany contacted the company to review his archived data, he wasn’t expecting to receive recordings of a stranger speaking in the privacy of a home.

The man requested to review his data in August under a European Union data protection law, according to a German trade magazine called c’t. Amazon sent him a download link to tracked searches on the website — and 1,700 audio recordings by Alexa that were generated by another person.

“I was very surprised about that because I don’t use Amazon Alexa, let alone have an Alexa-enabled device,” the customer, who was not named, told the magazine. “So I randomly listened to some of these audio files and could not recognize any of the voices.”



© 2019 | Daniel Christian