From DSC: Google hopes that this personalized AI/app will help people with their note-taking, thinking, brainstorming, learning, and creating.
It reminds me of what Derek Bruff was just saying with regard to Top Hat’s Ace product being able to work with a much narrower set of information — i.e., a course — and to be almost like a personal learning assistant for the course you are taking. (As Derek mentions, this depends upon how extensively one uses the CMS/LMS in the first place.)
This week on the Midjourney subreddit, a user named Theblasian35 wrote: “Made an Adidas AI spec commercial during my coffee break.”
Specifics on how it was made aside, the video gets at something bigger: we now live in a world where someone can fairly easily spin up a gorgeous, professional-grade commercial—all using affordable, accessible, intuitive online tools. What does this mean for multi-million-dollar ad budgets?
From DSC: Like someone said, that must have been the world’s longest coffee break. 🙂
Programmable medicines. AI tools for kids. We asked
over 40 partners across a16z to preview one big idea
they believe will drive innovation in 2024.
Narrowly Tailored, Purpose-Built AI
In 2024, I predict we’ll see narrower AI solutions. While ChatGPT may be a great general AI assistant, it’s unlikely to “win” for every task. I expect we’ll see an AI platform purpose-built for researchers, a writing generation tool targeted at journalists, and a rendering platform specifically for designers, to give just a few examples.
Over the longer term, I think the products people use on an everyday basis will be tailored to their use cases — whether this is a proprietary underlying model or a special workflow built around it. These companies will have the chance to “own” the data and workflow for a new era of technology; they’ll do this by nailing one category, then expanding. For the initial product, the narrower the better.
— via Olivia Moore, who focuses on marketplace startups
Today, we’re a step closer to this vision as we introduce Gemini, the most capable and general model we’ve ever built.
Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video.
So, in many ways, ChatGPT and its friends are far from as intelligent as a human; they do not have “general” intelligence (AGI).
But this will not last for long. The debate about Project Q* aside, AIs with the ability to reason at a high level, plan, and maintain long-term memory are expected in the next 2–3 years. We are already seeing AI agents that are developing the ability to act autonomously and collaborate to a degree. Once AIs can reason and plan, acting autonomously and collaborating will not be a challenge.
ChatGPT is winning the future — but what future is that? — from theverge.com by David Pierce
OpenAI didn’t mean to kickstart a generational shift in the technology industry. But it did. Now all we have to decide is where to go from here.
We don’t know yet if AI will ultimately change the world the way the internet, social media, and the smartphone did. Those things weren’t just technological leaps — they actually reorganized our lives in fundamental and irreversible ways. If the final form of AI is “my computer writes some of my emails for me,” AI won’t make that list. But there are a lot of smart people and trillions of dollars betting that’s the beginning of the AI story, not the end. If they’re right, the day OpenAI launched its “research preview” of ChatGPT will be much more than a product launch for the ages. It’ll be the day the world changed, and we didn’t even see it coming.
“AI is overhyped” — from theneurondaily.com by Pete Huang & Noah Edelman
If you’re feeling like AI is the future, but you’re not sure where to start, here’s our advice for 2024 based on our convos with business leaders:
Start with problems – Map out where your business is spending time and money, then ask if AI can help. Don’t do AI to say you’re doing AI.
Model the behavior – Teams do better in making use of new tools when their leadership buys in. Show them your support.
Do what you can, wait for the rest – With AI evolving so fast, “do nothing for now” is totally valid. Start with what you can do today (accelerating individual employee output) and keep up-to-date on the rest.
Google has unveiled a new artificial intelligence model that it claims outperforms ChatGPT in most tests and displays “advanced reasoning” across multiple formats, including an ability to view and mark a student’s physics homework.
The model, called Gemini, is the first to be announced since last month’s global AI safety summit, at which tech firms agreed to collaborate with governments on testing advanced systems before and after their release. Google said it was in discussions with the UK’s newly formed AI Safety Institute over testing Gemini’s most powerful version, which will be released next year.
More Chief Online Learning Officers Step Up to Senior Leadership Roles
In 2024, I think we will see more Chief Online Learning Officers (COLOs) take on more significant roles and projects at institutions.
In recent years, we have seen many COLOs accept provost positions. The typical provost career path that runs up through the faculty ranks does not adequately prepare leaders for the digital transformation occurring in postsecondary education.
As we’ve seen with the professionalization of the COLO role, in general, these same leaders proved to be incredibly valuable during the pandemic due to their unique skills: part academic, part entrepreneur, part technologist, COLOs are unique in higher education. They sit at the epicenter of teaching, learning, technology, and sustainability. As institutions are evolving, look for more online and professional continuing leaders to take on more senior roles on campuses.
Julie Uranis, Senior Vice President, Online and Strategic Initiatives, UPCEA
6. ChatGPT’s hype will fade, as a new generation of tailor-made bots rises up
11. We’ll finally turn the corner on teacher pay in 2024
21. Employers will combat job applicants’ use of AI with…more AI
31. Universities will view the creator economy as a viable career path
DSC: I’m a bit confused this morning, as I’m seeing multiple — but different — references to “Q” and what it is. It seems to be at least two different things: 1) OpenAI’s secret project Q*, and 2) Amazon Q, a new type of generative artificial intelligence-powered assistant.
We need to start aligning the educational system with a world where humans live with machines that have intelligence capabilities that approximate their own.
Today’s freshmen *may* graduate into a world where AIs have at least similar intelligence abilities to humans. Today’s 1st graders *probably* will.
Efforts need to be made to align the educational system with a world where machines will have intelligence capabilities similar to those of humans.
AWS Announces Amazon Q to Reimagine the Future of Work — from press.aboutamazon.com
New type of generative AI-powered assistant, built with security and privacy in mind, empowers employees to get answers to questions, solve problems, generate content, and take actions using the data and expertise found at their company
LAS VEGAS–(BUSINESS WIRE)–At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), today announced Amazon Q, a new type of generative artificial intelligence (AI)-powered assistant that is specifically for work and can be tailored to a customer’s business. Customers can get fast, relevant answers to pressing questions, generate content, and take actions — all informed by a customer’s information repositories, code, and enterprise systems. Amazon Q provides information and advice to employees to streamline tasks, accelerate decision making and problem solving, and help spark creativity and innovation at work.
What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.
I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.
From DSC: Readers of this blog have seen the following graphic for several years now, but there is no question that we are in a time of exponential change. One would have had an increasingly hard time arguing the opposite of this perspective during that time.
Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal fourth quarter.
Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
Nvidia announced the GH200 GPU during the quarter.
Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:
Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
Revenue: $18.12 billion, vs. $16.18 billion expected
Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.
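As a quick sanity check on the figures quoted above, the prior-year revenue (which the excerpt does not state directly) can be backed out from the reported 206% growth rate, and the net income multiple from the two reported net income figures. This is only an illustrative back-of-the-envelope calculation:

```python
# Back-of-the-envelope check on the reported Nvidia figures.
# The prior-year revenue is not given in the excerpt; it is implied
# by the reported 206% year-over-year growth.

current_revenue_b = 18.12    # reported quarterly revenue, $B
revenue_growth = 2.06        # 206% year-over-year growth

# current = prior * (1 + growth)  =>  prior = current / (1 + growth)
prior_revenue_b = current_revenue_b / (1 + revenue_growth)

current_net_income_b = 9.24  # reported net income, $B
prior_net_income_b = 0.68    # net income a year earlier, $B
net_income_multiple = current_net_income_b / prior_net_income_b

print(f"Implied prior-year revenue: ${prior_revenue_b:.2f}B")        # ~ $5.92B
print(f"Net income grew roughly {net_income_multiple:.1f}x year over year")
```

The implied prior-year revenue of roughly $5.9 billion is consistent with a quarter in which generative-AI demand had not yet taken off.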
DC: Anyone surprised? This is why the U.S. doesn’t want high-powered chips going to China. History repeats itself…again. The ways of the world/power continue on.
Pentagon’s AI initiatives accelerate hard decisions on lethal autonomous weapons https://t.co/PTDmJugiE2
From DSC: The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the power and reach of the technologies that they create, market, and provide.
We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.
LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.
What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking or ideas. This is for real. With real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit. I’m not bashing these platforms per se. But my point is that there are real impacts due to a variety of technologies. What goes on in the hearts and minds of the leaders of these tech companies matters.
No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminating about who and what they are angry at.
… Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.
…
The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.
Our feeds are being sourced in ways that dramatically change the content we’re exposed to.
And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”
…
How do we know when we’ve been polarized? This is the most important question of the day.
…
Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.
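The mechanism described above can be illustrated with a toy feed ranker. This is a hypothetical sketch, not any platform’s actual algorithm, and the item pool, the “outrage” scores, and the assumed correlation between outrage and engagement are all invented for illustration: if predicted engagement is even partly driven by how inflammatory an item is, a ranker that simply maximizes engagement will fill the top of the feed with the most inflammatory content.

```python
import random

random.seed(42)

# Toy model: each candidate item has an "outrage" score in [0, 1], and its
# predicted engagement is partly driven by that score plus random noise.
# (Hypothetical numbers; not any real platform's ranking model.)
items = []
for i in range(1000):
    outrage = random.random()
    engagement = 0.5 * outrage + 0.5 * random.random()
    items.append({"id": i, "outrage": outrage, "engagement": engagement})

# An engagement-maximizing ranker simply sorts by predicted engagement.
feed = sorted(items, key=lambda item: item["engagement"], reverse=True)

top_outrage = sum(item["outrage"] for item in feed[:50]) / 50
avg_outrage = sum(item["outrage"] for item in items) / len(items)

print(f"Average outrage, all items:   {avg_outrage:.2f}")  # ~0.5
print(f"Average outrage, top of feed: {top_outrage:.2f}")  # noticeably higher
```

Even with outrage contributing only half of the engagement signal, the top of the ranked feed is markedly more inflammatory than the overall pool — no one had to program "show people enraging content" for that outcome to emerge.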
Despite the hype and big promises about AI, could it, if used correctly, be the differentiator that sets good legal professionals apart from the pack? Stephen Embry offers a good argument for this in the latest episode.
Stephen is a long-time attorney and the legal tech aficionado behind the TechLaw Crossroads blog — a great resource for practical and real-world insight about legal tech and how technology is impacting the practice of law. Embry emphasizes that good lawyers will embrace artificial intelligence to increase efficiency and serve their clients better, leaving more time for strategic thinking and advisory roles.
AI Pedagogy Project, metaLAB (at) Harvard
Creative and critical engagement with AI in education. A collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi
AI Guide
Focused on the essentials and written to be accessible to a newcomer, this interactive guide will give you the background you need to feel more confident with engaging conversations about AI in your classroom.
From #47 of SAIL: Sensemaking AI Learning — by George Siemens
Excerpt (emphasis DSC):
Welcome to Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting education and learning.
Over the last year, after dozens of conferences, many webinars, panels, workshops, and many (many) conversations with colleagues, it’s starting to feel like higher education, as a system, is in an AI Groundhog Day loop. I haven’t heard anything novel generated by universities. We have a chatbot! Soon it will be a tutor! We have a generative AI faculty council! Here’s our list of links to sites that also have lists! We need AI literacy! My mantra over the last while has been that higher education leadership is failing us on AI in a more dramatic way than it failed us on digitization and online learning. What will your universities be buying from AI vendors in five years because they failed to develop a strategic vision and capabilities today?
AI + the Education System — from drphilippahardman.substack.com by Dr. Philippa Hardman
The key to relevance, value & excellence?
(e) What is really required is a significant re-organization of schooling and curriculum. At a meta-level, the school system is focused on developing the type of intelligence I opened with, and the economic value of that is going to rapidly decline.
(f). This is all going to happen very quickly (faster than any previous change in history), and many people aren’t paying attention. AI is already here.
9 Tips for Using AI for Learning (and Fun!) — from edutopia.org by Daniel Leonard; via Donna Norton on X/Twitter These innovative, AI-driven activities will help you engage students across grade levels and subject areas.
Here are nine AI-based lesson ideas to try across different grade levels and subject areas.
ELEMENTARY SCHOOL
Courtesy of Meta AI Research
A child’s drawing (left) and animations created with Animated Drawings.
1. Bring Student Drawings to Life: Young kids love to sketch, and AI can animate their sketches—and introduce them to the power of the technology in the process.
HIGH SCHOOL
8. Speak With AI in a Foreign Language: When learning a new language, students might feel self-conscious about making mistakes and avoid practicing as much as they should.
Though not necessarily about education, also see:
How I Use AI for Productivity — from wondertools.substack.com by Jeremy Caplan
In this Wonder Tools audio post I share a dozen of my favorite AI tools
From DSC: I like Jeremy’s mentioning the various tools that he used in making this audio post:
Adobe podcast for recording and removing background noise from the opening supplemental audio clip, and Adobe Mic check to gauge microphone positioning
From DSC: As I’ve long stated on the Learning from the Living [Class]Room vision, we are heading toward a new AI-empowered learning platform — where humans play a critically important role in making this new learning ecosystem work.
Along these lines, I ran into this site out on X/Twitter. We’ll see how this unfolds, but it will be an interesting space to watch.
From DSC: This future learning platform will also focus on developing skills and competencies. Along those lines, see:
Scale for Skills-First — from the-job.beehiiv.com by Paul Fain
An ed-tech giant’s ambitious moves into digital credentialing and learner records.
A Digital Canvas for Skills
Instructure was a player in the skills and credentials space before its recent acquisition of Parchment, a digital transcript company. But that $800M move made many observers wonder if Instructure can develop digital records of skills that learners, colleges, and employers might actually use broadly.
…
Ultimately, he says, the CLR approach will allow students to bring these various learning types into a coherent format for employers.
Instructure seeks a leadership role in working with other organizations to establish common standards for credentials and learner records, to help create consistency. The company collaborates closely with 1EdTech. And last month it helped launch the 1EdTech TrustEd Microcredential Coalition, which aims to increase quality and trust in digital credentials.
These could help with the creation of interactive learning assistants, aligned with curricula.
They can be easily created with natural language programming.
Important to note: users must have a ChatGPT Plus (paid) account.
Custom GPT Store:
Marketplace for sharing and accessing educational GPT tools created by other teachers.
A store could offer access to specialised tools for diverse learning needs.
A store could enhance teaching strategies when accessing proven, effective GPT applications.
From DSC: I appreciate Dan’s potential menu of options for a child’s education:
Monday AM: Sports club
Monday PM: Synthesis Online School AI Tutor
Tuesday AM: Music Lesson
Tuesday PM: Synthesis Online School Group Work
Wednesday AM: Drama Rehearsal
Wednesday PM: Synthesis Online School AI Tutor
Thursday AM: Volunteer work
Thursday PM: Private study
Friday AM: Work experience
Friday PM: Work experience
Our daughter has special learning needs and this is very similar to what she is doing.
The future of generative AI in higher ed? How to be critical, pragmatic, and playful at once? Nothing like having someone you admire ask you hard questions in a friendly way. This Q&A with Leon Furze got me to take a stab at articulating some responses. https://t.co/IxreZYwWmv
— Anna Mills, annamillsoer.bsky.social, she/her (@EnglishOER) November 13, 2023
Two boxes. In my May Cottesmore presentation, I put up two boxes:
(a) Box 1 — How educators can use AI to do what they do now (lesson plans, quizzes, tests, vocabulary lists, etc.)
(b) Box 2 — How the education system needs to change because, in the near future (sort of already), everyone is going to have multiple AIs working with them all day, and the premium on intelligence, especially “knowledge-based” intelligence, is going to decline rapidly. It’s hard to think that significant changes in the education system won’t be needed to accommodate that change.
There is a lot of focus on preparing educators to work in Box 1, which is important, if for no other reason than that they can see the power of even the current but limited technologies, but the hard questions are starting to be about Box 2. I encourage you to start those conversations, as the “ed tech” companies already are, and they’ll be happy to provide the answers and the services if you don’t want to.
Practical suggestion: create two AI teams in your institution. Team 1 works on Box 1 and Team 2 works on Box 2.