Also relevant/see:

We have moved from Human Teachers and Human Learners as a dyad to Human and AI Teachers alongside Human and AI Learners as a tetrad.


 

FBI, Pentagon helped research facial recognition for street cameras, drones — from washingtonpost.com by Drew Harwell
Internal documents released in response to a lawsuit show the government was deeply involved in pushing for face-scanning technology that could be used for mass surveillance

Excerpt:

The FBI and the Defense Department were actively involved in research and development of facial recognition software that they hoped could be used to identify people from video footage captured by street cameras and flying drones, according to thousands of pages of internal documents that provide new details about the government’s ambitions to build out a powerful tool for advanced surveillance.

From DSC:
This doesn’t surprise me. But it’s yet another example of opaqueness involving technology. And who knows to what levels our Department of Defense has taken things with AI, drones, and robotics.

 

You are not a parrot — from nymag.com by Elizabeth Weil and Emily M. Bender

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.

Excerpts:

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

 
 

Educator considerations for ChatGPT — from platform.openai.com; with thanks to Anna Mills for this resource

Excerpt:

Streamlined and personalized teaching
Some examples of how we’ve seen educators exploring how to teach and learn with tools like ChatGPT:

  • Drafting and brainstorming for lesson plans and other activities
  • Help with design of quiz questions or other exercises
  • Experimenting with custom tutoring tools
  • Customizing materials for different preferences (simplifying language, adjusting to different reading levels, creating tailored activities for different interests)
  • Providing grammatical or structural feedback on portions of writing
  • Use in upskilling activities in areas like writing and coding (debugging code, revising writing, asking for explanations)
  • Critiquing AI-generated text

While several of the above draw on ChatGPT’s potential to be explored as a tool for personalization, there are risks associated with such personalization as well, including student privacy, biased treatment, and development of unhealthy habits. Before students use tools that offer these services without direct supervision, they and their educators should understand the limitations of the tools outlined below.
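As an illustration of the "custom tutoring tools" and "customizing materials" ideas in the list above, here is a minimal sketch of a helper that asks a chat model to rewrite a passage at a target reading level. This is not an official OpenAI example; the prompt wording, model name, and default grade level are illustrative assumptions, and the API call follows the openai-python (v1) chat interface.

```python
# A minimal sketch (not an official OpenAI example) of a "simplify to a
# target reading level" tutoring helper. The prompt wording, model name,
# and grade level are illustrative assumptions.

def build_messages(passage: str, grade_level: int) -> list:
    """Construct a chat prompt asking the model to rewrite a passage."""
    return [
        {
            "role": "system",
            "content": (
                f"You are a patient tutor. Rewrite the user's text at a "
                f"grade-{grade_level} reading level without changing its meaning."
            ),
        },
        {"role": "user", "content": passage},
    ]

def simplify(passage: str, grade_level: int = 6) -> str:
    """Send the prompt to the Chat Completions API and return the rewrite."""
    from openai import OpenAI  # lazy import; requires the openai package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_messages(passage, grade_level),
    )
    return resp.choices[0].message.content
```

An educator could wrap `simplify()` in a simple form or spreadsheet script to produce several reading-level variants of the same handout, which is exactly the kind of unsupervised personalization the caution below is about.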

Also relevant/see:

Excerpt (emphasis DSC):
David Wiley wrote a thoughtful post on the ways in which AI and Large Language Models (LLMs) can “provide instructional designers with first drafts of some of the work they do.” He says “imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of” learning outcomes, discussion prompts, rubrics, and formative assessment items. The point is that LLMs can quickly generate rough drafts that are mostly accurate, which humans can then “review, augment, and polish,” potentially shifting the work of instructional designers from authors to editors. The post is well worth your time.

The question that I’d like to spend some time thinking about is the following: What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?

This resonated with me. Instructional Designer positions are starting to require AI and ML chops. I’m introducing my grad students to AI and ChatGPT this semester. I have an assignment based on it.

(This ain’t your father’s instructional design…)

Robert Gibson


 

From DSC:
Check out the items below. As with most technologies, there are likely to be pluses and minuses regarding the use of AI in digital video, communications, arts, and music.



Also see:


Also somewhat relevant, see:

 

Police are rolling out new tech without knowing their effects on people — from The Algorithm by Melissa Heikkilä

Excerpt:

I got lucky—my encounter was with a drone in virtual reality as part of an experiment by a team from University College London and the London School of Economics. They’re studying how people react when meeting police drones, and whether they come away feeling more or less trusting of the police.

It seems obvious that encounters with police drones might not be pleasant. But police departments are adopting these sorts of technologies without even trying to find out.

“Nobody is even asking the question: Is this technology going to do more harm than good?” says Aziz Huq, a law professor at the University of Chicago, who is not involved in the research.

 

The incredible shrinking future of college — from vox.com by Kevin Carey

Excerpt:

The future looks very different in some parts of the country than in others, and will also vary among national four-year universities, regional universities like Ship, and community colleges. Grawe projects that, despite the overall demographic decline, demand for national four-year universities on the West Coast will increase by more than 7.5 percent between now and the mid-2030s. But in states like New York, Ohio, Michigan, Wisconsin, Illinois, and Louisiana, it will decline by 15 percent or more.

Higher ed’s eight-decade run of unbroken good fortune may be about to end.

Demand for regional four-year universities, per Grawe, will drop by at least 7.5 percent across New England, the mid-Atlantic, and Southern states other than Florida and Texas, with smaller declines in the Great Plains. Community colleges will be hit hard in most places other than Florida, which has a robust two-year system with a large Latino population.

The next generation of higher education leaders will take scarcity as a given and “return on investment” as both sales pitch and state of mind.

The decline of American higher education — from youtube.com by Bryan Alexander and Kevin Carey

 

Most Colleges Omit or Understate Net Costs in Financial-Aid Offers, Federal Watchdog Finds — from chronicle.com by Eric Hoover

Excerpt:

Nine out of 10 colleges either exclude or understate the net cost of attendance in their financial-aid offers to students, according to estimates published in a new report by the Government Accountability Office. The watchdog agency recommended that Congress consider legislation that would require institutions to provide “clear and standard information.”

The lack of clarity makes it hard for students to decide where to enroll and how much to borrow.

The report, published on Monday, paints a troubling picture of an industry that makes it difficult for consumers to understand the bottom line by presenting insufficient if not downright misleading information. Federal law does not require colleges to present financial-aid offers in a clear, consistent way to all students.

Higher ed faces ‘deteriorating’ outlook in 2023, Fitch says — from highereddive.com by Rick Seltzer

Dive Brief (excerpt):

  • U.S. higher education faces a stable but deteriorating credit outlook in 2023, Fitch Ratings said Thursday, taking a more pessimistic view of the sector’s future than it had at the same time last year.
  • Operating performance at colleges and universities will be pressured by enrollment, labor and wage challenges, according to the bond ratings agency. Colleges have been able to raise tuition slightly because of inflation, but additional revenue they generate generally isn’t expected to be enough to offset rising costs.

Merger Watch: Don’t wait too long to find a merger partner. Closure does not benefit anybody. — from highereddive.com by Ricardo Azziz
Leaders fail students, employees and communities when they embrace a strategy of hope in the face of overwhelming evidence.

Excerpt:

While not all institutions can (or should) be saved, most institutional closures reflect the failure of past governing boards to face the fiscal reality of their institution — and to plan accordingly and in a timely manner. Leaders should always consider and, if necessary, pursue potential partnerships, mergers, or consolidations before a school has exhausted its financial and political capital. The inability or unwillingness of many leaders to take such action is reflected in the fact that the number of institutional closures in higher education far outweighs the number of successful mergers.

In fact, the risk of closure can be predicted. In a prior analysis several coauthors and I reported on a number of risk factors predictive of closure, noting that most schools at risk for closure are small and financially fragile, with declining enrollment and limited resources to mount significant online programs. While there are many clear signs that a school is at risk for closure, the major challenge to mounting a response seems to be the unwillingness of institutional leaders to understand, face and act on these signs.

What can colleges learn from degrees awarded in the fast-shrinking journalism field? — from highereddive.com by Lilah Burke
Bachelor’s degrees offer solid payoffs, while grad programs post mixed returns, researchers find. But many students don’t go on to work in the field.

Excerpt:

Journalism jobs are hard to find. But it’s nice work when you can get it.

That’s the takeaway from a new report from the Georgetown University Center on Education and the Workforce on the payoff of journalism programs. An analysis of federal education and labor data reveals that journalism and communication bachelor’s degrees offer moderate payoff to their graduates, but only 15% of majors end up working in the field early in their careers. Newsroom employment has declined 26% since 2008, and researchers predict it will fall 3% over the next nine years.


Addendum on 12/10/22:

A Sectorwide Approach to Higher Ed’s Future — from insidehighered.com by Sylvia M. Burwell
Institutions must seek ways to differentiate themselves even as they work together to address common challenges facing all of higher education, writes Sylvia M. Burwell.

We have to think differently about the future of higher education. And rather than limit our work to what one type of institution or program can achieve, we should look across the entire higher education sector.

A sectorwide [insert DSC: system-wide] approach is needed because the economics of higher education are not going to hold.

To evolve our thinking on these questions, we should focus on the value proposition of higher education and market differentiation.

 

From DSC:
I’d like to thank Sarah Huibregtse for her post out on LinkedIn where she commented on and referenced the following item from Nicholas Thompson (CEO at The Atlantic):


Also relevant/see:


Also related/see the following item which I thank Sam DeBrule’s Machine Learnings newsletter for:


Also, somewhat related, see the following item that Julie Johnston mentioned out on LinkedIn:

Top 10 conversational AI trends for 2023 — from linkedin.com by Kane Simms, Tim Holve, Tarren Corbett-Drummond, Arte Merritt, and Kevin Fredrick

Excerpt:

In 2023, businesses will realise that, in order to get out of FAQ Land, they need to synchronise business systems together to deliver personalised transactional experiences for customers.

“We finally have the technologies to do all the things we imagined 10 years ago.”

 

10 Must Read Books for Learning Designers — from linkedin.com by Amit Garg

Excerpt:

From the 45+ #books that I’ve read in last 2 years here are my top 10 recommendations for #learningdesigners or anyone in #learninganddevelopment

Speaking of recommended books (but from a more technical perspective this time), also see:

10 must-read tech books for 2023 — from enterprisersproject.com by Katie Sanders (Editorial Team)
Get new thinking on the technologies of tomorrow – from AI to cloud and edge – and the related challenges for leaders


 

The Justice Gap: The Unmet Civil Legal Needs of Low-income Americans — from the Legal Services Corporation

Legal Services Corporation’s 2022 Justice Gap Report provides a comprehensive look at the differences between the civil legal needs of low-income Americans and the resources available to meet those needs. LSC’s study found that low-income Americans do not get the help they need for 92% of their civil legal problems, even though 74% of low-income households face at least one civil legal issue in a single year.

The consequences that result from a lack of appropriate counsel can be life-altering – low-income Americans facing civil legal problems can lose their homes, children and healthcare, among other things. Help can be hard to access, so LSC is working to bridge this “justice gap” by providing pro bono civil legal aid for those in need. Find out more about LSC’s work to ensure equal justice for all by tuning in to the rest of the Justice Gap video series.

For more information on the Justice Gap, visit https://justicegap.lsc.gov/.

Also relevant/see:


 
 

These are the most important AI trends, according to top AI experts — from nexxworks.com
Somewhat in the shadow of the (often) overhyped metaverse and Web3 paradigms, AI seems to be developing at great speed. That’s why we asked a group of top AI experts in our network to describe what they think are the most important trends, evolutions and areas of interest of the moment in that domain.

Excerpt:

All of them have different backgrounds and areas of expertise, but some patterns still emerged in their stories, several of them mentioning ethics, the impact on the climate (both positively and negatively), the danger of overhyping, the need for transparency and explainability, interdisciplinary collaborations, robots and the many challenges that still need to be overcome.

But let’s see what they have to say, shall we?

Also relevant/see:

AI is revolutionizing every field and science is no exception — from dataconomy.com by Kerem Gülen

Table of Contents

  • Artificial intelligence in science
    • Artificial intelligence in science: Biology
    • Artificial intelligence in science: Physics
    • Artificial intelligence in science: Chemistry
  • AI in science and research
    • How is AI used in scientific research?
      • Protein structures can be predicted using genetic data
      • Recognizing how climate change affects cities and regions
      • Analyzing astronomical data
  • AI in science examples
    • Interpreting social history with archival data
    • Using satellite images to aid in conservation
    • Understanding complex organic chemistry
  • Conclusion

Also relevant/see:

  • How ‘Responsible AI’ Is Ethically Shaping Our Future — from learningsolutionsmag.com by Markus Bernhardt
    Excerpt:
    The PwC 2022 AI Business Survey finds that “AI success is becoming the rule, not the exception,” and, according to PwC US, published in the 2021 AI Predictions & 2021 Responsible AI Insights Report, “Responsible AI is the leading priority among industry leaders for AI applications in 2021, with emphasis on improving privacy, explainability, bias detection, and governance.”
  • Why you need an AI ethics committee — from enterprisersproject.com by Reid Blackman (requires providing email address to get the article)
 

“Unleash all this creativity”: Google AI’s breathtaking potential — from axios.com by Jennifer Kingson

Excerpt:

Google’s research arm on Wednesday showed off a whiz-bang assortment of artificial intelligence (AI) projects it’s incubating, aimed at everything from mitigating climate change to helping novelists craft prose.

Why it matters: AI has breathtaking potential to improve and enrich our lives — and comes with hugely worrisome risks of misuse, intrusion and malfeasance, if not developed and deployed responsibly.

Driving the news: The dozen-or-so AI projects that Google Research unfurled at a Manhattan media event are in various stages of development, with goals ranging from societal improvement (such as better health diagnoses) to pure creativity and fun (text-to-image generation that can help you build a 3D image of a skirt-clad monster made of marzipan).

The “1,000 Languages Initiative”: Google is building an AI model that will work with the world’s 1,000 most-spoken languages.

  • AI “can have immense social benefits” and “unleash all this creativity,” said Marian Croak, head of Google Research’s center of expertise on responsible AI.
  • “But because it has such a broad impact on people, the risk involved can also be very huge. And if we don’t get that right … it can be very destructive.”

    And as Axios’ Scott Rosenberg has written, society is only just beginning to grapple with the legal and ethical questions raised by AI’s new capacity to generate text and images.
 

The Lord detests dishonest scales,
    but accurate weights find favor with him. (Proverbs 11:1)

From DSC:
I thought about this verse the other day as I opened up a brand-new box of cereal. The box made it look like I was getting a lot of cereal and therefore a decent value for the price. But when I opened it up, it was only about half full. (I realize some of this occurs as the contents settle, but come on!) In fairness, the amount of cereal is printed on the box, but the manufacturer likely kept the size of the box the same while decreasing the amount they put within it. They kept the price the same but changed the quantity sold.

This shrinkification of items seems to be happening more these days — as companies don’t want to change their prices, so they’ll change the amounts of their products that you get.
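A quick back-of-the-envelope calculation shows why this amounts to a hidden price increase. The numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical numbers: a cereal box shrinks from 500 g to 400 g
# while the shelf price stays at $4.00.
def unit_price(price_dollars: float, grams: float) -> float:
    """Price per 100 g of product."""
    return price_dollars / grams * 100

before = unit_price(4.00, 500)   # $0.80 per 100 g
after = unit_price(4.00, 400)    # $1.00 per 100 g
increase_pct = (after - before) / before * 100
print(f"Effective price increase: {increase_pct:.0f}%")  # prints 25%
```

In other words, a 20% cut in quantity at a constant shelf price is a 25% increase in what you actually pay per gram, without the sticker price ever changing.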

  • It just strikes me as yet another deception.
  • We BS each other too much.
  • We rip each other off too much.
  • We bury stuff in the fine print.
  • Our advertising is not always truthful — words are easy to say, and much harder to back up.
  • We treat people as though they exist only for us to make money off of. It’s like what Philip Morris did to people for years, and it still occurs today with other companies.
  • From today’s perspective, people are to be competed against, not to be in relationship with.

I hope that we can all build and offer solid products and services while putting serious quality into them. Let’s make our products and offer our services as if we were making them for ourselves and our families. Let’s use “accurate weights.” And while we’re trying to do the right things, let’s aim to be in caring relationships with others.

 
© 2024 | Daniel Christian