From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the enormous power and reach of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking or ideas. This is for real. With real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit. I’m not bashing these platforms per se. But my point is that there are real impacts due to a variety of technologies. What goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.
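To make the mechanism Belsky describes a bit more concrete, here is a minimal, purely illustrative sketch (not any platform's actual code) of a feed ranker that orders items solely by predicted engagement. The posts and scores are invented; the point is simply that a ranker optimizing for attention, with no check on accuracy or tone, will keep pushing the most inflammatory item to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g., output of a click/dwell-time model

def rank_feed(posts: list[Post]) -> list[Post]:
    # Optimize for one thing only: expected attention captured.
    # Nothing here asks "is this accurate?" or "is this inflammatory?"
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

if __name__ == "__main__":
    candidate_posts = [
        Post("Measured, both-sides policy explainer", predicted_engagement=0.12),
        Post("Cute animal photo", predicted_engagement=0.35),
        Post("Outrage-bait headline about 'those people'", predicted_engagement=0.81),
    ]
    for post in rank_feed(candidate_posts):
        print(f"{post.predicted_engagement:.2f}  {post.text}")
```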


 

 

OpenAI announces leadership transition — from openai.com
Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor.

Excerpt (emphasis DSC):

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.


As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

From DSC:
I’m not here to pass judgment, but all of us on planet Earth should at least be concerned about this disturbing news.

AI is one of the most powerful sets of emerging technologies on the planet right now. OpenAI is arguably the most powerful vendor/innovator/influencer/leader in that space. And Sam Altman was the face of OpenAI — and arguably of AI itself. So this is a big deal.

What concerns me is what is NOT being relayed in this posting:

  • What was being hidden from OpenAI’s Board?
  • What else doesn’t the public know? 
  • Why is Greg Brockman stepping down as Chairman of the Board?

To whom much is given, much is expected.


Also related/see:

OpenAI CEO Sam Altman ousted, shocking AI world — from washingtonpost.com by Gerrit De Vynck and Nitasha Tiku
The artificial intelligence company’s directors said he was not ‘consistently candid in his communications with the board’

Altman’s sudden departure sent shock waves through the technology industry and the halls of government, where he had become a familiar presence in debates over the regulation of AI. His rise and apparent fall from tech’s top rung is one of the fastest in Silicon Valley history. In less than a year, he went from being Bay Area famous as a failed start-up founder who reinvented himself as a popular investor in small companies to becoming one of the most influential business leaders in the world. Journalists, politicians, tech investors and Fortune 500 CEOs alike had been clamoring for his attention.

OpenAI’s Board Pushes Out Sam Altman, Its High-Profile C.E.O. — from nytimes.com by Cade Metz

Sam Altman, the high-profile chief executive of OpenAI, who became the face of the tech industry’s artificial intelligence boom, was pushed out of the company by its board of directors, OpenAI said in a blog post on Friday afternoon.


From DSC:
Updates — I just saw these items

Sam Altman fired as CEO of OpenAI — from theverge.com by Jay Peters
In a sudden move, Altman is leaving after the company’s board determined that he ‘was not consistently candid in his communications.’ President and co-founder Greg Brockman has also quit.



 

OpenAI Is Slowly Killing Prompt Engineering With The Latest ChatGPT and DALL-E Updates — from artificialcorner.substack.com by
ChatGPT and DALL-E 3 now do most of the prompting for us. Does this mean the end of prompt engineering?

Prompt engineering is a must-have skill that any AI enthusiast should have … at least until OpenAI released GPTs and DALL-E 3.

OpenAI doesn’t want to force users to learn prompt engineering to get the most out of its tools.

It seems OpenAI’s goal is to make its tools as easy to use as possible allowing even non-tech people to create outstanding AI images and tailored versions of ChatGPT without learning prompting techniques or coding.

AI can now generate prompts for us, but is this enough to kill prompt engineering? To answer this, let’s see how good are these AI-generated prompts.
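One quick way to see this prompt rewriting for yourself is through OpenAI's Python SDK. To the best of my knowledge, the DALL-E 3 image endpoint returns a `revised_prompt` field showing how the model expanded a short prompt on its own, but treat the exact field names and behavior as assumptions to check against OpenAI's current documentation (an API key is required). A rough sketch:

```python
# A rough sketch: request an image from a deliberately bare prompt,
# then print the expanded prompt the model wrote for itself.
# Assumes the `openai` Python package (v1.x) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="a cozy reading nook",  # minimal, "unengineered" prompt
    size="1024x1024",
    n=1,
)

image = response.data[0]
print("Prompt the model actually used:")
print(image.revised_prompt)  # DALL-E 3's own, much more detailed prompt
print("Image URL:", image.url)
```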

From DSC:
I agree with several others that prompt engineering will be drastically altered. For the majority of us, I wouldn’t spend a lot of time trying to become a Prompt Engineer.




 

Be My Eyes AI offers GPT-4-powered support for blind Microsoft customers — from theverge.com by Sheena Vasani
The tech giant’s using Be My Eyes’ visual assistant tool to help blind users quickly resolve issues without a human agent.


From DSC:
Speaking of Microsoft and AI:

 

AI Pedagogy Project, metaLAB (at) Harvard
Creative and critical engagement with AI in education. A collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi

AI Guide
Focused on the essentials and written to be accessible to a newcomer, this interactive guide will give you the background you need to feel more confident with engaging conversations about AI in your classroom.


From #47 of SAIL: Sensemaking AI Learning — by George Siemens

Excerpt (emphasis DSC):

Welcome to Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting education and learning.

Over the last year, after dozens of conferences, many webinars, panels, workshops, and many (many) conversations with colleagues, it’s starting to feel like higher education, as a system, is in an AI groundhog’s day loop. I haven’t heard anything novel generated by universities. We have a chatbot! Soon it will be a tutor! We have a generative AI faculty council! Here’s our list of links to sites that also have lists! We need AI literacy! My mantra over the last while has been that higher education leadership is failing us on AI in a more dramatic way than it failed us on digitization and online learning. What will your universities be buying from AI vendors in five years because they failed to develop a strategic vision and capabilities today?


AI + the Education System — from drphilippahardman.substack.com by Dr. Philippa Hardman
The key to relevance, value & excellence?


The magic school of the future is one that helps students learn to work together and care for each other — from stefanbauschard.substack.com by Stefan Bauschard
AI is going to alter economic and professional structures. Will we alter the educational structures?

(e) What is really required is a significant re-organization of schooling and curriculum. At a meta-level, the school system is focused on developing the type of intelligence I opened with, and the economic value of that is going to rapidly decline.

(f) This is all going to happen very quickly (faster than any previous change in history), and many people aren’t paying attention. AI is already here.


 

9 Tips for Using AI for Learning (and Fun!) — from edutopia.org by Daniel Leonard; via Donna Norton on X/Twitter
These innovative, AI-driven activities will help you engage students across grade levels and subject areas.

Here are nine AI-based lesson ideas to try across different grade levels and subject areas.

ELEMENTARY SCHOOL

A child’s drawing (left) and animations created with Animated Drawings. (Image courtesy of Meta AI Research.)

1. Bring Student Drawings to Life: Young kids love to sketch, and AI can animate their sketches—and introduce them to the power of the technology in the process.

HIGH SCHOOL

8. Speak With AI in a Foreign Language: When learning a new language, students might feel self-conscious about making mistakes and avoid practicing as much as they should.


Though not necessarily about education, also see:

How I Use AI for Productivity — from wondertools.substack.com by Jeremy Caplan
In this Wonder Tools audio post I share a dozen of my favorite AI tools

From DSC:
I like that Jeremy mentions the various tools he used in making this audio post:

 

Where a developing, new kind of learning ecosystem is likely headed [Christian]

From DSC:
As I’ve long stated regarding the Learning from the Living [Class]Room vision, we are heading toward a new AI-empowered learning platform — one where humans play a critically important role in making this new learning ecosystem work.

Along these lines, I ran into this site out on X/Twitter. We’ll see how this unfolds, but it will be an interesting space to watch.

Project Chiron’s vision for education: “Every child will soon have a super-intelligent AI teacher by their side. We want to make sure they instill a love of learning in children.”


From DSC:
This future learning platform will also focus on developing skills and competencies. Along those lines, see:

Scale for Skills-First — from the-job.beehiiv.com by Paul Fain
An ed-tech giant’s ambitious moves into digital credentialing and learner records.

A Digital Canvas for Skills
Instructure was a player in the skills and credentials space before its recent acquisition of Parchment, a digital transcript company. But that $800M move made many observers wonder if Instructure can develop digital records of skills that learners, colleges, and employers might actually use broadly.

Ultimately, he says, the CLR approach will allow students to bring these various learning types into a coherent format for employers.

Instructure seeks a leadership role in working with other organizations to establish common standards for credentials and learner records, to help create consistency. The company collaborates closely with 1EdTech. And last month it helped launch the 1EdTech TrustEd Microcredential Coalition, which aims to increase quality and trust in digital credentials.

Paul also links to 1EdTech’s page regarding the Comprehensive Learner Record.

 

The new apprenticeships — from jordanfurlong.substack.com by Jordan Furlong
Several American states are rewriting the rules of lawyer licensure and bringing the US into line with a key element of lawyer formation worldwide: supervised practice.

Change comes so gradually and fitfully to the legal sector that when something truly revolutionary happens — an actual turning point with an identifiable real-world impact — we have to mark the occasion. One such revolution broke out in the United States last week, opening up fantastic new possibilities for Americans who want to become lawyers.

The Oregon Supreme Court approved a new licensure program that does not require passage of a traditional written bar exam. After graduating from law school, aspiring Oregon lawyers can complete 675 hours of paid legal work under the supervision of an experienced attorney, assembling a portfolio of legal work to be assessed by bar admission officials. Candidates must submit eight samples of legal writing, take the lead in at least two initial client interviews or client counseling sessions, and oversee two negotiations, among other requirements.

Jordan mentions what’s going on in several other states including:

  • Utah
  • Washington
  • Minnesota
  • Nevada
  • California
  • Massachusetts
  • South Dakota

From DSC:
The Bar Exam doesn’t have a good reputation for actually getting someone ready to practice law. So this is huge news indeed! The U.S. needs more people/specialists at the legal table moving forward, and the items Jordan relays in this posting are a major step toward making that a reality.


For other innovations within the legal realm, see:

LawSchoolAi — from youtube.com

Picture this: A world where anyone can unlock the doors to legal expertise, no matter their background or resources. Introducing Law School AI – the game-changing platform turning this vision into reality. Our mission? To make legal education accessible, affordable, and tailored to every learner’s unique style, by leveraging the power of artificial intelligence.

As a trailblazing edtech company, Law School AI fuses cutting-edge AI technology with modern pedagogical techniques to craft a personalized, immersive, and transformative learning experience. Our platform shatters boundaries, opening up equal opportunities for individuals from all walks of life to master the intricacies of law.

Embrace a new era of legal education with Law School AI, where the age-old law school experience is reimagined as a thrilling, engaging, and interactive odyssey. Welcome to the future of legal learning.

 

 

 


GPTs (pt. 3) — from theneurondaily.com by Noah Edelman

BTW, here are a few GPTs worth checking out today:

  • ConvertAnything—convert images, audio, videos, PDFs, files, & more.
  • editGPT—edit any writing (like Grammarly inside ChatGPT).
  • Grimoire—a coding assistant that helps you build anything!

Some notes from Dan Fitzpatrick – The AI Educator:

Custom GPT Bots:

  • These could help with the creation of interactive learning assistants, aligned with curricula.
  • They can be easily created with natural language programming.
  • Important to note users must have a ChatGPT Plus paid account

Custom GPT Store:

  • Marketplace for sharing and accessing educational GPT tools created by other teachers.
  • A store could offer access to specialised tools for diverse learning needs.
  • A store could enhance teaching strategies when accessing proven, effective GPT applications.

From DSC:
I appreciate Dan’s potential menu of options for a child’s education:

Monday AM: Sports club
Monday PM: Synthesis Online School AI Tutor
Tuesday AM: Music Lesson
Tuesday PM: Synthesis Online School Group Work
Wednesday AM: Drama Rehearsal
Wednesday PM: Synthesis Online School AI Tutor
Thursday AM: Volunteer work
Thursday PM: Private study
Friday AM: Work experience
Friday PM: Work experience

Our daughter has special learning needs and this is very similar to what she is doing. 

Also, Dan has a couple of videos out here at Google for Education:



Tuesday’s AI Ten for Educators (November 14) — from stefanbauschard.substack.com by Stefan Bauschard
Ten AI developments for educators to be aware of

Two boxes. In my May Cottesmore presentation, I put up two boxes:

(a) Box 1 — How educators can use AI to do what they do now (lesson plans, quizzes, tests, vocabulary lists, etc.)

(b) Box 2 — How the education system needs to change because, in the near future (sort of already), everyone is going to have multiple AIs working with them all day, and the premium on intelligence, especially “knowledge-based” intelligence, is going to decline rapidly. It’s hard to think that significant changes in the education system won’t be needed to accommodate that change.

There is a lot of focus on preparing educators to work in Box 1, which is important, if for no other reason than that they can see the power of even the current but limited technologies, but the hard questions are starting to be about Box 2. I encourage you to start those conversations, as the “ed tech” companies already are, and they’ll be happy to provide the answers and the services if you don’t want to.

Practical suggestions: Two AI teams in your institution. Team 1 works on Box 1 and Team 2 works on Box 2.

 

The Beatles’ final song is now streaming thanks to AI — from theverge.com by Chris Welch
Machine learning helped Paul McCartney and Ringo Starr turn an old John Lennon demo into what’s likely the band’s last collaborative effort.


Scientists excited by AI tool that grades severity of rare cancer — from bbc.com by Fergus Walsh

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.


Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving — from venturebeat.com by Michael Nuñez

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.
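The paper itself reportedly uses a stronger model (GPT-4) to generate the corrections; the sketch below simply shows, under my own simplifying assumptions, what assembling "learn from the mistake" fine-tuning records could look like. The example data and the chat-message format are illustrative, not taken from the LeMa paper.

```python
# Illustrative sketch of the "learning from mistakes" idea: turn
# (problem, flawed solution, correction) triples into fine-tuning records.
# The data and formatting below are a toy example, not LeMa's actual pipeline.
import json

mistake_correction_examples = [
    {
        "problem": "A shirt costs $20 and is discounted 25%. What is the sale price?",
        "flawed_solution": "25% of 20 is 5, so the sale price is $25.",
        "correction": "The discount is subtracted, not added: 20 - 5 = 15. The sale price is $15.",
    },
]

def to_finetune_record(example: dict) -> dict:
    """Build one chat-style training record that shows a mistake and its fix."""
    return {
        "messages": [
            {"role": "user",
             "content": f"Problem: {example['problem']}\n"
                        f"Here is an incorrect solution: {example['flawed_solution']}\n"
                        "Explain the error and give a corrected solution."},
            {"role": "assistant", "content": example["correction"]},
        ]
    }

with open("lema_style_training_data.jsonl", "w") as f:
    for ex in mistake_correction_examples:
        f.write(json.dumps(to_finetune_record(ex)) + "\n")
```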

Also from Michael Nuñez at venturebeat.com, see:


GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448 — from exponentialview.co by Azeem Azhar and Chantal Smith


Personalized A.I. Agents Are Here. Is the World Ready for Them? — from nytimes.com by Kevin Roose (behind a paywall)

You could think of the recent history of A.I. chatbots as having two distinct phases.

The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”


From DSC:
Very cool!


Nvidia Stock Jumps After Unveiling of Next Major AI Chip. It’s Bad News for Rivals. — from barrons.com

On Monday, Nvidia (ticker: NVDA) announced its new H200 Tensor Core GPU. The chip incorporates 141 gigabytes of memory and offers up to 60% to 90% performance improvements versus its current H100 model when used for inference, or generating answers from popular AI models.

From DSC:
The exponential curve seems to be continuing — a 60% to 90% performance improvement is a huge boost.

Also relevant/see:


The 5 Best GPTs for Work — from the AI Exchange

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

 

MIT Technology Review — Big problems that demand bigger energy. — from technologyreview.com by various

Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify what problem at the intersection of technology and society that they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.

Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.

MIT Technology Review asked many people to weigh in on underserved issues at the intersection of technology and society.

 

How ChatGPT changed my approach to learning — from wondertools.substack.com by Jeremy Caplan and Frank Andrade
A guest contributor tutored himself with AI

Excerpt:

Frank: ChatGPT has changed how I learn and practice new things every day.

  • I use ChatGPT not only to fix my mistakes, but also to learn from them.
  • I use ChatGPT Voice to explore new topics, simulate job interviews, and practice foreign languages.
  • You can even use ChatGPT Vision to learn from images!

Here’s how to use AI to enhance your learning.

 


The legal world in 10 years (if we’re really lucky) — from jordanfurlong.substack.com by Jordan Furlong
Here are eight predictions for how the legal sector will be different and mostly better in 2033. If you have an alternative vision, there’s one sure way to prove me wrong.

What part are you going to play in determining the future? This isn’t a spectator sport or a video game. “The future” will be what you (and everyone else) make it by your decisions, commitments, sacrifices, and leadership — or, equally, by your inaction on all these fronts.

There’s a meme making the rounds that says, “People in time-travel movies are always afraid of committing one tiny action in the past, because it might completely change the present. But people in the present don’t seem to believe that committing one tiny action in the present could completely change the future.” I think that has it exactly right. The future we get is the future that you and I start making in the present, meaning today, right now.


Shadow AI: A Thorny Problem For Law Firms — from abovethelaw.com by Sharon D. Nelson, John W. Simek, and Michael C. Maschke
Its use is often unknown to a law firm’s IT or security group.


 

Excerpt (emphasis DSC):

For the first time, a physical neural network has successfully been shown to learn and remember “on the fly,” in a way inspired by and similar to how the brain’s neurons work.


Also see:

Nanowire ‘brain’ learns like humans! — from tech.therundown.ai by Rowan Cheung

Excerpt (emphasis DSC):

The Rundown: University of Sydney researchers have created a “brain-like” nanowire network capable of learning and remembering in real-time, similar to that of human brain function.

The details:

  • The nanowire neural network self-organizes into patterns, functioning like the brain’s synapses by responding to electrical currents.
 

A future-facing minister, a young inventor and a shared vision: An AI tutor for every student — from news.microsoft.com by Chris Welsch

The Ministry of Education and Pativada see what has become known as the U.A.E. AI Tutor as a way to provide students with 24/7 assistance as well as help level the playing field for those families who cannot afford a private tutor. At the same time, the AI Tutor would be an aid to teachers, they say. “We see it as a tool that will support our teachers,” says Aljughaiman. “This is a supplement to classroom learning.”

If everything goes according to plan, every student in the United Arab Emirates’ school system will have a personal AI tutor – that fits in their pockets.

It’s a story that involves an element of coincidence, a forward-looking education minister and a tech team led by a chief executive officer who still lives at home with his parents.

In February 2023, the U.A.E.’s education minister, His Excellency Dr. Ahmad Belhoul Al Falasi, announced that the ministry was embracing AI technology and pursuing the idea of an AI tutor to help Emirati students succeed. And he also announced that the speech he presented had been written by ChatGPT. “We should not demonize AI,” he said at the time.



Fostering deep learning in humans and amplifying our intelligence in an AI World — from stefanbauschard.substack.com by Stefan Bauschard
A free 288-page report on advancements in AI and related technology, their effects on education, and our practical support for AI-amplified human deep learning

Six weeks ago, Dr. Sabba Quidwai and I accidentally stumbled upon an idea to compare the deep learning revolution in computer science to the mostly lacking deep learning efforts in education (Mehta & Fine). I started writing, and as these things often go with me, I thought there were many other things that would be useful to think through and for educators to know, and we ended up with this 288-page report.

***

Here’s an abstract from that report:

This report looks at the growing gap between the attention paid to the development of intelligence in machines and humans. While computer scientists have made great strides in developing human intelligence capacities in machines using deep learning technologies, including the abilities of machines to learn on their own, a significant part of the education system has not kept up with developing the intelligence capabilities in people that will enable them to succeed in the 21st century. Instead of fully embracing pedagogical methods that place primary emphasis on promoting collaboration, critical thinking, communication, creativity, and self-learning through experiential, interdisciplinary approaches grounded in human deep learning and combined with current technologies, a substantial portion of the educational system continues to heavily rely on traditional instructional methods and goals. These methods and goals prioritize knowledge acquisition and organization, areas in which machines already perform substantially better than people.

Also from Stefan Bauschard, see:

  • Debating in the World of AI
    Performative assessment, learning to collaborate with humans and machines, and developing special human qualities

13 Nuggets of AI Wisdom for Higher Education Leaders — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable AI Guidance for Higher Education Leaders

Incentivize faculty AI innovation with AI. 

Invest in people first, then technology. 

On teaching, learning, and assessment. AI has captured the attention of all institutional stakeholders. Capitalize to reimagine pedagogy and evaluation. Rethink lectures, examinations, and assignments to align with workforce needs. Consider incorporating Problem-Based Learning, building portfolios and proof of work, and conducting oral exams. And use AI to provide individualized support and assess real-world skills.

Actively engage students.


Some thoughts from George Siemens re: AI:

Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting learning.

Our education system has a uni-dimensional focus: learning things. Of course, we say we care about developing the whole learner, but the metrics that matter (grades, transcripts) that underpin the education system are largely focused on teaching students things that have long been Google-able but are now increasingly doable by AI. Developments in AI matter in ways that call into question large parts of what happens in our universities. This is not a statement that people don’t need to learn core concepts and skills. My point is that the fulcrum of learning has shifted. Knowing things will continue to matter less and less going forward as AI improves its capabilities. We’ll need to start intentionally developing broader and broader attributes of learners: metacognition, wellness, affect, social engagement, etc. Education will continue to shift toward human skills and away from primary assessment of knowledge gains disconnected from skills and practice and ways of being.


AI, the Next Chapter for College Librarians — from insidehighered.com by Lauren Coffey
Librarians have lived through the disruptions of fax machines, websites and Wikipedia, and now they are bracing to do it again as artificial intelligence tools go mainstream: “Maybe it’s our time to shine.”

A few months after ChatGPT launched last fall, faculty and students at Northwestern University had many questions about the building wave of new artificial intelligence tools. So they turned to a familiar source of help: the library.

“At the time it was seen as a research and citation problem, so that led them to us,” said Michelle Guittar, head of instruction and curriculum support at Northwestern University Libraries.

In response, Guittar, along with librarian Jeanette Moss, created a landing page in April, “Using AI Tools in Your Research.” At the time, the university itself had yet to put together a comprehensive resource page.


From Dr. Nick Jackson’s recent post on LinkedIn: 

Last night the Digitech team of junior and senior teachers from Scotch College Adelaide showcased their 2023 experiments, innovation, successes and failures with technology in education. Accompanied by Student digital leaders, we saw the following:

  • AI used for language learning, where avatars can help with accents
  • Motion-capture suits being used in media studies
  • AI used in assessment and automatic grading of work
  • AR used in design technology
  • VR used for immersive Junior school experiences
  • A teacher’s AI toolkit that has changed teaching practice and workflow
  • AR and the EyeJack app used by students to create dynamic artwork
  • VR use in careers education in Senior school
  • How ethics around AI is taught to Junior school students from Year 1
  • Experiments with MyStudyWorks

Almost an Agent: What GPTs can do — from oneusefulthing.org by Ethan Mollick

What would a real AI agent look like? A simple agent that writes academic papers would, after being given a dataset and a field of study, read about how to compose a good paper, analyze the data, conduct a literature review, generate hypotheses, test them, and then write up the results, all without intervention. You put in a request, you get a Word document that contains a draft of an academic paper.

A process kind of like this one:
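Mollick illustrates the idea in his post; as a rough companion sketch of that kind of chained loop, here is what the orchestration could look like. The `call_llm` stub and the step list are my own placeholders (you would wire in a real model call and real data analysis), not his or OpenAI's implementation.

```python
# A toy skeleton of a "paper-writing agent": run a fixed chain of steps,
# feeding each step's output into the next. The model call is stubbed out.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., via an API client)."""
    return f"[model output for: {prompt[:60]}...]"

PIPELINE = [
    "Summarize best practices for writing a paper in {field}.",
    "Analyze this dataset and report key patterns: {dataset}.",
    "Write a brief literature review for {field} relevant to those patterns.",
    "Generate and evaluate hypotheses that could explain the patterns.",
    "Draft a full paper: intro, methods, results, discussion.",
]

def run_agent(field: str, dataset: str) -> str:
    context = ""
    for step in PIPELINE:
        prompt = step.format(field=field, dataset=dataset) + "\n\nContext so far:\n" + context
        result = call_llm(prompt)
        context += "\n" + result  # accumulate intermediate work for later steps
    return context  # the final accumulated draft

if __name__ == "__main__":
    print(run_agent(field="education research", dataset="survey_results.csv"))
```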


What I Learned From an Experiment to Apply Generative AI to My Data Course — from edsurge.com by Wendy Castillo

As an educator, I have a duty to remain informed about the latest developments in generative AI, not only to ensure learning is happening, but to stay on top of what tools exist, what benefits and limitations they have, and most importantly, how students might be using them.

However, it’s also important to acknowledge that the quality of work produced by students now requires higher expectations and potential adjustments to grading practices. The baseline is no longer zero, it is AI. And the upper limit of what humans can achieve with these new capabilities remains an unknown frontier.


Artificial Intelligence in Higher Education: Trick or Treat? — from tytonpartners.com by Kristen Fox and Catherine Shaw

Two components of AI: generative AI and predictive AI

 