5 things you will see in the future “smart city” — from interestingengineering.com by Taylor Donovan Barnett
The Smart City is on the horizon, and here are some of the crucial technologies that are part of it.


Excerpt:

A New Framework: The Smart City
So, what exactly is a smart city? A smart city is an urban center that hosts a wide range of digital technology across its ecosystem. However, smart cities go far beyond just this definition.

Smart cities use technology to improve their populations’ living experiences, operating as one big data-driven ecosystem.

The smart city uses data from people, vehicles, buildings, and so on not only to improve citizens’ lives but also to minimize the city’s environmental impact, constantly communicating with itself to maximize efficiency.

So what are some of the crucial components of the future smart city? Here is what you should know.
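As a toy illustration of the data-driven ecosystem idea described above, here is a minimal Python sketch that adjusts a traffic signal’s green time from simulated sensor counts; the timing constants and the sensor feed are assumptions made purely for illustration, not part of any real smart-city deployment.

# Toy illustration of a data-driven city service: lengthen or shorten a
# traffic light's green phase based on vehicle counts from road sensors.
# The sensor counts and timing constants below are invented for illustration.

def green_time_seconds(vehicles_waiting, base=20, per_vehicle=2, maximum=60):
    # More waiting vehicles -> a longer green phase, up to a cap.
    return min(base + per_vehicle * vehicles_waiting, maximum)

# Simulated counts from an intersection's sensors over a few signal cycles.
for count in [3, 12, 25, 7]:
    print(f"{count:>2} vehicles waiting -> green for {green_time_seconds(count)}s")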

Hey bot, what’s next in line in chatbot technology? — from blog.engati.com by Imtiaz Bellary

Excerpts:

Scenario 1: Are bots the new apps?
Scenario 2: Bot conversations that make sense
Scenario 3: Can bots increase employee throughput?
Scenario 4: Let voice take over!

 

Voice as an input medium is catching on, with an increasing number of folks adopting Amazon Echo and other digital assistants for their daily chores. Can we expect bots to gauge your mood and provide a personalised experience rather than a standard response? In regulated scenarios, voice acts as an authentication mechanism for the bot to pursue actions. Voice as an input adds sophistication and makes it easier to complete tasks quickly, thereby improving the user experience.
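A minimal sketch of what mood-aware responses might look like, assuming a crude keyword-based sentiment estimate (a real bot would use a trained sentiment model and a proper dialogue manager); the word lists and reply templates below are illustrative only.

# Hypothetical sketch: pick a chatbot reply style from a rough estimate of the
# user's mood. Word lists and replies are made up; no real bot framework is used.

NEGATIVE_WORDS = {"frustrated", "angry", "annoyed", "upset", "broken", "terrible"}
POSITIVE_WORDS = {"great", "thanks", "awesome", "love", "perfect"}

def estimate_mood(utterance):
    # Return "negative", "positive", or "neutral" from a simple word count.
    words = set(utterance.lower().split())
    negatives = len(words & NEGATIVE_WORDS)
    positives = len(words & POSITIVE_WORDS)
    if negatives > positives:
        return "negative"
    if positives > negatives:
        return "positive"
    return "neutral"

def respond(utterance):
    # Choose a reply tone based on the estimated mood.
    mood = estimate_mood(utterance)
    if mood == "negative":
        return "I'm sorry this has been frustrating. Let me connect you with a person."
    if mood == "positive":
        return "Glad to hear it! Anything else I can help with?"
    return "Sure. Could you tell me a bit more about what you need?"

print(respond("I'm really frustrated, my order arrived broken"))
print(respond("Thanks, that was great"))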

Forecast 5.0 – Navigating the Future of Learning — from knowledgeworks.org by Katherine Prince, Jason Swanson, and Katie King
Discover how current trends could impact learning ten years from now and consider ways to shape a future where all students can thrive.

AI Now Report 2018 | December 2018  — from ainowinstitute.org

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

Excerpt (emphasis DSC):

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Also see:

After a Year of Tech Scandals, Our 10 Recommendations for AI — from medium.com by the AI Now Institute
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

 

Also see:

Excerpt:

As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

 

From DSC:
This is a major heads-up to the American Bar Association (ABA), law schools, governments, legislatures around the country, the courts, the corporate world, as well as colleges, universities, and community colleges. Emerging technologies are moving much faster than society’s ability to deal with them!

The ABA and law schools need to significantly pick up their pace — for the benefit of all within our society.

The information below is from Heather Campbell at Chegg
(emphasis DSC)


 

Chegg Math Solver is an AI-driven tool to help students understand math. It is more than just a calculator – it explains the approach to solving the problem, so students won’t just copy the answer; they will understand it and be able to solve similar problems on their own. Most importantly, students can dig deeper into a problem and see why it’s solved that way. Chegg Math Solver.
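As a rough illustration of the explain-the-approach idea (rather than just returning an answer), here is a minimal Python sketch using the SymPy library to solve a linear equation and narrate the standard steps; it is only an assumption about how step-style output could be produced, not Chegg’s implementation.

# Illustrative sketch only: a tiny "show your work" solver built on SymPy.
# It solves a*x + b = c and prints the two standard steps along with the answer.

from sympy import Eq, Symbol, solve

def explain_linear_solution(a, b, c):
    x = Symbol("x")
    equation = Eq(a * x + b, c)
    print(f"Equation: {a}*x + {b} = {c}")
    # Step 1: move the constant term to the right-hand side.
    print(f"Step 1 - subtract {b} from both sides: {a}*x = {c - b}")
    # Step 2: divide both sides by the coefficient of x.
    solution = solve(equation, x)[0]
    print(f"Step 2 - divide both sides by {a}: x = {solution}")
    return solution

explain_linear_solution(3, 4, 19)   # 3x + 4 = 19  ->  x = 5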

In every subject, there are many key concepts and terms that are crucial for students to know and understand. Often it can be hard to determine what the most important concepts and terms are for a given subject, and even once you’ve identified them you still need to understand what they mean. To help you learn and understand these terms and concepts, we’ve provided thousands of definitions, written and compiled by Chegg experts. Chegg Definition.

From DSC:
I see this type of functionality as a piece of a next generation learning platform — a piece of the Learning from the Living [Class] Room type of vision. Great work here by Chegg!

Likely, students will also be able to take pictures of their homework, submit them online, and have the image/problem analyzed for correctness and/or to pinpoint where things went wrong.

Intelligent Machines: One of the fathers of AI is worried about its future — from technologyreview.com by Will Knight
Yoshua Bengio wants to stop talk of an AI arms race and make the technology more accessible to the developing world.

Excerpts:

Yoshua Bengio is a grand master of modern artificial intelligence.

Alongside Geoff Hinton and Yann LeCun, Bengio is famous for championing a technique known as deep learning that in recent years has gone from an academic curiosity to one of the most powerful technologies on the planet.

Deep learning involves feeding data to large neural networks that crudely simulate the human brain, and it has proved incredibly powerful and effective for all sorts of practical tasks, from voice recognition and image classification to controlling self-driving cars and automating business decisions.

Bengio has resisted the lure of any big tech company. While Hinton and LeCun joined Google and Facebook, respectively, he remains a full-time professor at the University of Montreal. (He did, however, cofound Element AI in 2016, and it has built a very successful business helping big companies explore the commercial applications of AI research.)

Bengio met with MIT Technology Review’s senior editor for AI, Will Knight, at an MIT event recently.

What do you make of the idea that there’s an AI race between different countries?

I don’t like it. I don’t think it’s the right way to do it.

We could collectively participate in a race, but as a scientist and somebody who wants to think about the common good, I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible.

Alexa, get me the articles (voice interfaces in academia) — from blog.libux.co by Kelly Dagan

Excerpt:

Credit to Jill O’Neill, who has written an engaging consideration of applications, discussions, and potentials for voice-user interfaces in the scholarly realm. She details a few use case scenarios: finding recent, authoritative biographies of Jane Austen; finding if your closest library has an item on the shelf now (and whether it’s worth the drive based on traffic).

Coming from an undergraduate-focused (and library) perspective, I can think of a few more:

  • asking if there are any group study rooms available at 7 pm and making a booking
  • finding out if [X] is open now (Archives, the Cafe, the Library, etc.)
  • finding three books on the Red Brigades, seeing if they are available, and saving the locations
  • grabbing five research articles on stereotype threat, to read later
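A very rough, library-agnostic sketch of how spoken requests like these might be routed to handlers is shown below; the intent names, the opening hours, and the booking logic are hypothetical, and a real skill would sit behind a platform such as the Alexa Skills Kit, which supplies the recognized intent and slot values.

# Hypothetical sketch of routing spoken library requests to simple handlers.
# Intent names, hours, and responses are invented for illustration; a real
# voice skill would receive the intent and slot values from the platform's NLU.

from datetime import datetime

HOURS = {"library": (8, 22), "archives": (9, 17), "cafe": (7, 20)}  # assumed hours

def handle_is_open(place, now=None):
    now = now or datetime.now()
    if place.lower() not in HOURS:
        return f"Sorry, I don't have hours for {place}."
    open_hour, close_hour = HOURS[place.lower()]
    if open_hour <= now.hour < close_hour:
        return f"Yes, the {place} is open until {close_hour}:00."
    return f"No, the {place} is closed right now. It opens at {open_hour}:00."

def handle_book_room(time_slot):
    # A real handler would call the room-booking system's API here.
    return f"I've requested a group study room for {time_slot}."

INTENT_HANDLERS = {
    "IsOpenIntent": lambda slots: handle_is_open(slots["place"]),
    "BookRoomIntent": lambda slots: handle_book_room(slots["time"]),
}

def route(intent, slots):
    handler = INTENT_HANDLERS.get(intent)
    return handler(slots) if handler else "Sorry, I didn't catch that."

print(route("IsOpenIntent", {"place": "Archives"}))
print(route("BookRoomIntent", {"time": "7 pm"}))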

The Insecurity of Things: a Brief History of US IoT Cybersecurity Legislation (Part 2) — from zone.com by Cate Lawrence
Check out these national attempts to legislate IoT security.

Excerpt:

There have been a number of efforts over the last few years to legislate or provide a legal response to matters of cybersecurity. Part 1 of this article looks at recent efforts by California. This article examines national attempts to legislate these poorly secured connected devices.

 

EXCLUSIVE: Chinese scientists are creating CRISPR babies — from technologyreview.com by Antonio Regalado
A daring effort is under way to create the first children whose DNA has been tailored using gene editing.

Excerpt:

When Chinese researchers first edited the genes of a human embryo in a lab dish in 2015, it sparked global outcry and pleas from scientists not to make a baby using the technology, at least for the present.

It was the invention of a powerful gene-editing tool, CRISPR, which is cheap and easy to deploy, that made the birth of humans genetically modified in an in vitro fertilization (IVF) center a theoretical possibility.

Now, it appears it may already be happening.

 

Where some see a new form of medicine that eliminates genetic disease, others see a slippery slope to enhancements, designer babies, and a new form of eugenics. 

Is Amazon’s algorithm cashing in on the Camp Fire by raising the cost of safety equipment? — from wired.co.uk by Matthew Chapman
Sudden and repeated price increases on fire extinguishers, axes and escape ladders sold on Amazon are seemingly linked to increased demand driven by California’s Camp Fire

Excerpt:

Amazon’s algorithm has allegedly been raising the price of fire safety equipment in response to increased demand during the California wildfires. The practice, known as surge pricing, has caused the prices of products including fire extinguishers and escape ladders to fluctuate significantly on Amazon, seemingly as a result of the retailer’s pricing system responding to increased demand.

An industry source with knowledge of the firm’s operations claims a similar price surge was triggered by the Grenfell Tower fire. A number of recent price rises coincide directly with the outbreak of the Camp Fire, which has been the deadliest wildfire in California’s history and has resulted in at least 83 deaths.
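For readers unfamiliar with how automated repricing can behave, here is a deliberately simplified Python sketch of a demand-responsive pricing rule; it is an assumed toy model meant only to show why a demand spike can translate directly into a price jump, and it does not describe Amazon’s actual system.

# Toy model of demand-responsive ("surge") pricing, for illustration only.
# Price scales with how far recent demand exceeds a baseline, up to a cap.

def reprice(base_price, recent_sales, baseline_sales, sensitivity=0.5, max_multiplier=2.0):
    if baseline_sales <= 0:
        return base_price
    demand_ratio = recent_sales / baseline_sales
    multiplier = 1.0 + sensitivity * max(0.0, demand_ratio - 1.0)
    return round(base_price * min(multiplier, max_multiplier), 2)

# Example: an extinguisher that normally sells 20 units a day suddenly sells 80.
print(reprice(base_price=29.99, recent_sales=80, baseline_sales=20))  # 59.98, capped at 2x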

 

From DSC:
I’ve been thinking a lot more about Amazon.com and Jeff Bezos in recent months, though I’m not entirely sure why. I think part of it has to do with the goals of capitalism.

If you want to see a winner in the way America trains up students, entrepreneurs, and business people, look no further than Jeff Bezos. He is the year-in-and-year-out champion of capitalism. He is the winner. He is the Michael Jordan of business. He is at the top. He gets how the game is played, and he’s a master at it. By all worldly standards, Jeff Bezos is the winner.

But historically speaking, he doesn’t come across as someone like Bill Gates — someone who has used his wealth to literally, significantly, and positively change millions of lives. (Though that finally looks to be changing a bit with the Bezos Day 1 Families Fund; the first grants of that fund total $97 million and will be given to 24 organizations working to address family homelessness. Source.)

Along those same lines — and expanding the scope a bit — I’m struggling with what the goals of capitalism are for us today…especially in an era of AI, algorithms, robotics, automation, and the like. If the goal is simply to make as much profit as possible, we could be in trouble. If what happens to people and families is much lower down the totem pole…what are the ramifications of that for our society? Yes, it’s a tough, cold world. But does it always have to be that way? What is the best, most excellent goal to pursue? What are we truly seeking to accomplish?

After my Uncle Chan died years ago, my Aunt Gail took over the family’s office supply business and ran it like a family. She cared about her employees and made decisions with an eye towards how things would impact her employees and their families. Yes, she had to make sound business decisions, but there was true caring in the way that she ran her business. I realize that the Amazons of the world are in a whole different league, but the values and principles involved here should not be lost just because of size.

 

To whom much is given…much is expected.

Also see:

GM to lay off 15 percent of salaried workers, halt production at five plants in U.S. and Canada — from washingtonpost.com by Taylor Telford

Wall Street applauded the news, with GM’s stock climbing more than 7 percent following the announcement.

 

From DSC:
Well, I bet those on Wall Street aren’t among the 15% of folks being impacted. The applause is not heard at all from the folks who are being impacted today…whose families are being impacted today…and who will be feeling the impact of these announcements for quite a while yet.
