10 predictions for the future of the IoT — from bbntimes.com by Ahmed Banafa

Also see:

  • How Artificial Intelligence will kickstart the Internet of Things — from bbntimes.com by Ahmed Banafa
    Excerpt:
    Examples of such IoT data: 

    • Data that helps cities predict accidents and crimes
    • Data that gives doctors real-time insight into information from pacemakers or biochips
    • Data that optimizes productivity across industries through predictive maintenance on equipment and machinery
    • Data that creates truly smart homes with connected appliances
    • Data that provides critical communication between self-driving cars
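
From DSC:
The predictive-maintenance bullet above is perhaps the easiest of these to make concrete. Here is a minimal sketch, in Python, of the kind of analysis involved: score each new sensor reading against that machine's own recent history and flag drift before a failure occurs. Every reading, field name, and threshold below is invented for illustration; production systems use far richer models.

```python
from collections import deque
from statistics import mean, stdev

def make_drift_detector(window: int = 60, threshold: float = 3.0):
    """Flag readings more than `threshold` standard deviations away
    from the trailing `window` of readings for this machine."""
    history = deque(maxlen=window)

    def score(reading: float) -> bool:
        is_anomaly = False
        if len(history) >= 10:  # wait for a baseline before scoring
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) > threshold * sigma:
                is_anomaly = True
        history.append(reading)
        return is_anomaly

    return score

# Hypothetical usage: vibration readings (mm/s) streaming from one machine
detect = make_drift_detector()
for reading in [2.1, 2.0, 2.2, 2.1, 2.3, 2.0, 2.1, 2.2, 2.1, 2.0, 2.1, 9.8]:
    if detect(reading):
        print(f"possible fault: vibration {reading} mm/s is far outside baseline")
```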

Map of fundamental technologies in legal services — from remakinglawfirms.com by Michelle Mahoney

Excerpt:
The Map is designed to help make sense of the trends we are observing:

  • an increasing number of legal technology offerings;
  • the increasing effectiveness of legal technologies;
  • emerging new categories of legal technology;
  • the layering and combining of fundamental technology capabilities; and
  • the maturation of machine learning, natural language processing and deep learning artificial intelligence.

Given the exponential nature of the technologies, the Fundamental Technologies Map can only depict the landscape at the current point in time.

Information processing in legal services (PDF file)

Also see:
Delta Model Update: The Most Important Area of Lawyer Competency — Personal Effectiveness Skills — from legalexecutiveinstitute.com by Natalie Runyon

Excerpt:

Many legal experts say the legal industry is at an inflection point because the pace of change is being driven by many factors — technology, client demand, disaggregation of matter workflow, the rise of Millennials approaching mid-career status, and the faster pace of business in general.

The fact that technology spend by law firms continues to be a primary area of investment underscores the fact that the pace of change is continuing to accelerate with the ongoing rise of big data and workflow technology that are greatly influencing how lawyering gets done. Moreover, combined with big unstructured data, artificial intelligence (AI) is creating opportunities to analyze siloed data sets to gain insights in numerous new ways.

Data visualization via VR and AR: How we’ll interact with tomorrow’s data — from zdnet.com by Greg Nichols

Excerpt:

But what if there was a way to visualize huge data sets that instantly revealed important trends and patterns? What if you could interact with the data, move it around, literally walk around it? That’s one of the lesser talked about promises of mixed reality. If developers can deliver on the promise, it just may be one of the most important enterprise applications of those emerging technologies, as well.

In the future, we may be using MR to walk around our data and to visualize it more effectively.
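
Just to make the idea a bit more concrete: before any VR or AR scene can render a data set, something has to map records onto room-scale coordinates. Below is a minimal Python sketch of that first step; the column names, data values, and room dimensions are all invented for illustration, and real mixed-reality tooling would hand these points to a rendering engine.

```python
# Map three numeric columns of a data set onto room-scale (x, y, z)
# coordinates that a VR/AR scene could render as floating points.
records = [
    {"revenue": 120, "headcount": 15, "growth": 0.10},
    {"revenue": 340, "headcount": 42, "growth": 0.25},
    {"revenue": 80,  "headcount": 9,  "growth": -0.05},
]

def to_room_coords(rows, fields=("revenue", "headcount", "growth"),
                   room=(4.0, 3.0, 3.0)):
    """Normalize each field to [0, 1] and scale it to one axis of the room."""
    spans = [(min(r[f] for r in rows), max(r[f] for r in rows)) for f in fields]
    points = []
    for r in rows:
        point = []
        for field, (lo, hi), size in zip(fields, spans, room):
            t = (r[field] - lo) / (hi - lo) if hi > lo else 0.5
            point.append(round(t * size, 2))
        points.append(tuple(point))
    return points

print(to_room_coords(records))
# [(0.62, 0.55, 1.5), (4.0, 3.0, 3.0), (0.0, 0.0, 0.0)]
```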

Also see:

Collaboration technology is fueling enterprise transformation – increasing agility, driving efficiency and improving productivity. Join Amy Chang at Enterprise Connect where she will share Cisco’s vision for the future of collaboration, the foundations we have in place and the amazing work we’re driving to win our customers’ hearts and minds. Cognitive collaboration – technology that weaves context and intelligence across applications, devices and workflows, connecting people with customers & colleagues, to deliver unprecedented experiences and transform how we work – is at the heart of our efforts. Join this session to see our technology in action and hear how our customers are using our portfolio of products today to transform the way they work.

4 key tech strategies for the survival of the small liberal arts college — from campustechnology.com by Kellie B. Campbell
In a recent study on the use of technology to reduce academic costs in liberal arts colleges, four distinct themes emerged: the strategic role of IT; the importance of data; the potential of alternative education delivery modes; and opportunities for institutional partnerships. Here’s how IT leaders at these small colleges understand the future of their institutions.

Excerpt:

In this study, the flexibility of the semi-structured interview format resulted in a fascinating level of honesty and bluntness from participants. In particular, participants’ language changed when they were asked to take off their professional hat and consider a new point of view — it was a chance to be vulnerable and honest. What was probably most interesting was that almost everyone signaled that the status quo is not sustainable. Something in the higher education model has to change for institutions to stay open, yet many lack a strategy for effecting change. Even if they do have a strategy in place on the business side, many are hesitant to dive into analysis and change on the academic side of the institution.

Institutions simply cannot continue to nibble at the edges of change. Significant change is needed in order to sustain the financial model of higher education. The ideas for doing so are out there, though the work must be guided by the institutional mission and consider new models for delivering education. CIOs and their departments can play an important role in that work — providing infrastructure, data, access, services and ideas — but institutional leadership at large needs to understand IT’s strategic role and position the organization to make that impact.

When participants were able to think about the “what if” question — what if the institution were forced to drastically cut academic costs — several had detailed, “out there” ideas that might not be traditionally welcomed into higher education cultures. Yet a number of participants were not being asked by their institutions to think about such ideas. The question is, if everyone agrees that the status quo is not sustainable, why aren’t they thinking about it?

Law schools escalate their focus on digital skills — from edtechmagazine.com by Eli Zimmerman
Coding, data analytics and device integration give students the tools to become more efficient lawyers.

Excerpt:

Participants learned to use analytics programs and artificial intelligence to complete work in a fraction of the time it usually takes.

For example, students analyzed contracts using AI programs to find errors and areas for improvement across various legal jurisdictions. In another exercise, students learned to use data programs to draft nondisclosure agreements in less than half an hour.

By learning analytics models, students will graduate with the skills to make them more effective — and more employable — professionals.

“As advancing technology and massive data sets enable lawyers to answer complex legal questions with greater speed and efficiency, courses like Legal Analytics will help KU Law students be better advocates for tomorrow’s clients and more competitive for tomorrow’s jobs,” Stephen Mazza, dean of the University of Kansas School of Law, tells Legaltech News.

Reflecting that shift, the Law School Admission Council, which organizes and distributes the Law School Admission Test, will be offering the test exclusively on Microsoft Surface Go tablets starting in July 2019.

From DSC:
I appreciate the article, thanks Eli. From one of the articles that was linked to, it appears that, “To facilitate the transition to the Digital LSAT starting July 2019, LSAC is procuring thousands of Surface Go tablets that will be loaded with custom software and locked down to ensure the integrity of the exam process and security of the test results.”
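
One more note on the contract-analysis exercise mentioned above: the article doesn't name the specific tools the students used, so the following is only a toy sketch of the idea behind automated contract review: scan text against a set of review rules and flag risky or missing language. Real legal-analytics products rely on trained NLP models rather than hand-written patterns like these.

```python
import re

# Hypothetical review rules; real tools learn these from data.
CLAUSE_PATTERNS = {
    "unlimited liability": re.compile(r"\bunlimited liability\b", re.IGNORECASE),
    "automatic renewal": re.compile(r"\bautomatic(ally)?\s+renew", re.IGNORECASE),
}

def review(contract_text: str) -> list[str]:
    """Return a list of flags for risky or missing clauses."""
    findings = [
        f"flag: contains {label!r} language"
        for label, pattern in CLAUSE_PATTERNS.items()
        if pattern.search(contract_text)
    ]
    if "governing law" not in contract_text.lower():
        findings.append("flag: no governing-law clause found")
    return findings

sample = "This agreement shall automatically renew for successive one-year terms."
print(review(sample))
# ["flag: contains 'automatic renewal' language", 'flag: no governing-law clause found']
```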

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
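
From DSC:
Crawford's point is easy to demonstrate with made-up numbers. If one area was historically patrolled twice as heavily, its recorded incidents double even when the true underlying rates are identical, and any model trained on those records simply inherits and reinforces the skew. A toy sketch (every number below is invented):

```python
# Two areas with identical true incident rates, but Area A was
# historically over-policed, so more of its incidents were recorded.
true_rate = {"Area A": 10, "Area B": 10}      # actual incidents per month
patrol_bias = {"Area A": 2.0, "Area B": 1.0}  # over-policing multiplier

recorded = {area: true_rate[area] * patrol_bias[area] for area in true_rate}

# A naive "predictive" model allocates future patrols in proportion to
# recorded incidents, sending even more patrols to the over-policed area.
total = sum(recorded.values())
allocation = {area: round(count / total, 2) for area, count in recorded.items()}

print(allocation)  # {'Area A': 0.67, 'Area B': 0.33} despite equal true rates
```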

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.
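
From DSC:
AiServe hasn't published the details of its stack, so here is only a shape-of-the-pipeline sketch: frames come in, a vision model returns labeled obstacles with distance estimates, and the nearest ones become spoken cues. The `detect_objects` stub below stands in for whatever detection and depth-estimation models are actually used; everything else here is hypothetical as well.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str         # e.g. "light post", "curb", "parked car"
    distance_m: float  # estimated distance from the walker

def detect_objects(frame) -> list[Obstacle]:
    """Stand-in for a real vision model (object detector + depth estimator)."""
    raise NotImplementedError

def audio_cues(obstacles: list[Obstacle], alert_within_m: float = 3.0) -> list[str]:
    """Turn nearby detections into spoken-cue text, nearest obstacle first."""
    nearby = sorted((o for o in obstacles if o.distance_m <= alert_within_m),
                    key=lambda o: o.distance_m)
    return [f"{o.label}, {o.distance_m:.1f} meters ahead" for o in nearby]

# Hypothetical detections for a single camera frame:
print(audio_cues([Obstacle("curb", 1.2), Obstacle("parked car", 2.5),
                  Obstacle("bench", 8.0)]))
# ['curb, 1.2 meters ahead', 'parked car, 2.5 meters ahead']
```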

So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to watch what happens as some of these emerging technologies get combined…again, for good or for ill.

The question is:
How can we weigh in?

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

Also relevant/see:

Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, a Dutch security researcher told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.

From DSC:
Readers of this blog will know that I’m generally pro-technology. But especially focusing in on that last article, to me, privacy is key here. Which group of people, from which nation, will be next? Will Country A be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will be the costs of being tracked or being a person with whatever certain characteristic one’s government is tracking? What forums are there for combating technologies or features of technologies that we don’t like or want?

We need forums/channels for raising awareness and voting on these emerging technologies. We need informed legislators, senators, lawyers, citizens…we need new laws here…asap.

Learning and Student Success: Presenting the Results of the 2019 Key Issues Survey — from er.educause.edu by Malcolm Brown

Excerpts:

Here are some results that caught (Malcolm’s) eye, with a few speculations tossed in:

  • The issue of faculty development reclaimed the top spot.
  • Academic transformation, previously a consistent top-three finisher, took a tumble in 2019 down to 10th.
  • After falling to 16th last year, the issue of competency-based education and new methods of learning assessment jumped up to 6th for 2019.
  • The issues of accessibility and universal design for learning (UDL) and of digital and information literacy held more or less steady.
  • Online and blended learning has rebounded significantly.

 
Why Facebook’s banned “Research” app was so invasive — from wired.com by Louise Matsakis

Excerpts:

Facebook reportedly paid users between the ages of 13 and 35 $20 a month to download the app through beta-testing companies like Applause, BetaBound, and uTest.


Apple typically doesn’t allow app developers to go around the App Store, but its enterprise program is one exception. It’s what allows companies to create custom apps not meant to be downloaded publicly, like an iPad app for signing guests into a corporate office. But Facebook used this program for a consumer research app, which Apple says violates its rules. “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple,” a spokesperson said in a statement. “Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” Facebook didn’t respond to a request for comment.

Facebook needed to bypass Apple’s usual policies because its Research app is particularly invasive. First, it requires users to install what is known as a “root certificate.” This lets Facebook look at much of your browsing history and other network data, even if it’s encrypted. The certificate is like a shape-shifting passport—with it, Facebook can pretend to be almost anyone it wants.

To use a nondigital analogy, Facebook not only intercepted every letter participants sent and received, it also had the ability to open and read them. All for $20 a month!

Facebook’s latest privacy scandal is a good reminder to be wary of mobile apps that aren’t available for download in official app stores. It’s easy to overlook how much of your information might be collected, or to accidentally install a malicious version of Fortnite, for instance. VPNs can be great privacy tools, but many free ones sell their users’ data in order to make money. Before downloading anything, especially an app that promises to earn you some extra cash, it’s always worth taking another look at the risks involved.
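
From DSC:
The root-certificate detail is worth making concrete. A TLS client trusts any certificate that chains up to a root in the device's trust store, and it makes no distinction between a public certificate authority and an extra root a user was induced to install. That is precisely what lets a proxy re-sign and read "encrypted" traffic. Here is a small sketch, using only Python's standard library, of inspecting the trust store a client would consult:

```python
import ssl

# A default TLS context loads the platform trust store. Every root in it
# is trusted equally, including any additional root a user installs,
# which is what made the Research app's certificate so powerful.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()

print(f"{len(roots)} trusted root certificates on this machine")
for cert in roots[:3]:
    # 'subject' is a tuple of relative-distinguished-name pairs
    subject = dict(pair[0] for pair in cert["subject"])
    print("trusted root:", subject.get("organizationName", subject))
```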

2019 Top 10 IT Issues — from educause.edu

2019 Reveals Focus on “Student Genome”
In 2019, after a decade of preparing, higher education stands on a threshold. A new era of technology is ushering in myriad opportunities to apply data that supports and advances our higher ed mission. This threshold is similar to the one science stood on in the late 20th century: the prospect of employing technology to put genetic information to use meaningfully and ethically. Much in the same way, higher education must first “sequence” the data before we can apply it with any reliability or precision.

Our focus in 2019, then, is to organize, standardize, and safeguard data before applying it to our most pressing priority: student success.

The issues cluster into three themes:

  • Empowered Students: In their drive to improve student outcomes, institutions are increasingly focused on individual students, on their life circumstances, and on their entire academic journey. Leaders are relying on analytics and technology to make progress. Related issues: 2 and 4
  • Trusted Data: This is the work of the Student Genome Project, where the “sequencing” is taking place. Institutions are collecting, securing, integrating, and standardizing data and preparing the institution to use data meaningfully and ethically. Related issues: 1, 3, 5, 6 and 8
  • 21st Century Business Strategies: This is the leadership journey, in which institutions address today’s funding challenges and prepare for tomorrow’s more competitive ecosystem. Technology is now embedded into teaching and learning, research, and business operations and so must be embedded into the institutional strategy and business model. Related issues: 7, 9 and 10
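
From DSC:
The "sequencing" metaphor in the Trusted Data theme above boils down to unglamorous work: pulling student records out of silos and normalizing them into one canonical schema before any analytics run. A minimal sketch of that standardization step, with every system, field name, and format invented for illustration:

```python
from datetime import date

# Hypothetical raw records from two siloed campus systems that use
# inconsistent identifiers, field names, and formats.
sis_record = {"student_id": "00123", "dob": "2001-09-14", "status": "FT"}
lms_record = {"id": 123, "last_login": "2019-02-11", "active_courses": 4}

def standardize(sis: dict, lms: dict) -> dict:
    """Merge records from both silos into one canonical student profile."""
    return {
        "student_id": int(sis["student_id"]),             # one canonical ID type
        "date_of_birth": date.fromisoformat(sis["dob"]),
        "enrollment": {"FT": "full_time", "PT": "part_time"}[sis["status"]],
        "last_lms_login": date.fromisoformat(lms["last_login"]),
        "active_courses": lms["active_courses"],
    }

print(standardize(sis_record, lms_record))
```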


The information below is per Laura Kelley (w/ Page 1 Solutions)


As you know, Apple has shut down Facebook’s ability to distribute internal iOS apps. The shutdown comes following news that Facebook has been using Apple’s program for internal app distribution to track teenage customers for “research.”

Dan Goldstein is the president and owner of Page 1 Solutions, a full-service digital marketing agency. He manages the needs of clients along with the need to ensure protection of their consumers, which has become one of clients’ top concerns over the last year. Goldstein is also a former attorney, so he balances the marketing side with the legal side when it comes to protection for both companies and their consumers. He says that while this is another blow for Facebook, it speaks volumes about Apple and its concern for consumers:

“Facebook continues to demonstrate that it does not value user privacy. The most disturbing thing about this news is that Facebook knew that its app violated Apple’s terms of service and continued to distribute the app to consumers after it was banned from the App Store. This shows, once again, that Facebook doesn’t value user privacy and goes to great lengths to collect private behavioral data to give it a competitive advantage. The FTC is already investigating Facebook’s privacy policies and practices. As Facebook’s efforts to collect and use private data continue to be exposed, it risks losing market share and may prompt additional governmental investigations and regulation,” Goldstein says.

“One positive that comes out of this story is that Apple seems to be taking a harder line on protecting user privacy than other tech companies. Apple has been making noises about protecting user privacy for several months. This action indicates that it is attempting to follow through on its promises,” Goldstein says.
