AI-assisted job fraud is spiking — from thedeepview.co by Ian Krietzberg

A recent report published by the Identity Theft Resource Center (ITRC) found that data from 2023 shows “an environment where bad actors are more effective, efficient and successful in launching attacks. The result is fewer victims (or at least fewer victim reports), but the impact on individuals and businesses is arguably more damaging.”

One of these attacks involves fake job postings.

The details: The ITRC said that victim reports of job and employment scams spiked some 118% in 2023. These scams were primarily carried out through LinkedIn and other job search platforms.

    • The bad actors here would either create fake (but professional-looking) job postings, profiles and websites or impersonate legitimate companies, all in hopes of luring victims into an interview process.
    • These actors would then move the conversation to a third-party messaging platform and ask for identity-verification information (driver’s licenses, Social Security numbers, direct deposit information, etc.).

Hypernatural — AI videos you can actually use. — via Jeremy Caplan’s Wonder Tools

Hypernatural is an AI video platform that makes it easy to create beautiful, ready-to-share videos from anything. Stop settling for glitchy 3-second generated videos and boring stock footage. Turn your ideas, scripts, podcasts and more into incredible short-form videos in minutes.


GPT-4o mini: advancing cost-efficient intelligence — from openai.com
Introducing our most cost-efficient small model

OpenAI is committed to making intelligence as broadly accessible as possible. Today, we’re announcing GPT-4o mini, our most cost-efficient small model. We expect GPT-4o mini will significantly expand the range of applications built with AI by making intelligence much more affordable. GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences in the LMSYS leaderboard. It is priced at 15 cents per million input tokens and 60 cents per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo.
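To make the quoted pricing concrete, here is a minimal back-of-the-envelope sketch. The per-token rates come from the announcement above; the helper function itself is hypothetical, not part of any official SDK:

```python
# Announced GPT-4o mini rates: $0.15 per 1M input tokens,
# $0.60 per 1M output tokens.
INPUT_RATE_PER_M = 0.15
OUTPUT_RATE_PER_M = 0.60

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in US dollars for a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# e.g., a 10,000-token prompt with a 1,000-token reply
print(round(estimate_cost(10_000, 1_000), 6))  # → 0.0021
```

At these rates, even a fairly long prompt and reply costs a fraction of a cent, which is what makes the high-volume use cases described below plausible.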

GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).
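The “parallelize multiple model calls” pattern mentioned above can be sketched as follows. This is a hypothetical illustration using Python’s standard thread pool; `call_model` is a stand-in stub for a real API request, not an actual OpenAI SDK function:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    # Stand-in for a real model call (e.g., an HTTP request to an API);
    # replaced with a canned response so the sketch is self-contained.
    return f"response to: {prompt}"

prompts = ["summarize doc A", "summarize doc B", "summarize doc C"]

# Issue the calls concurrently; latency-bound API calls are a good
# fit for threads, and pool.map preserves the input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(call_model, prompts))

print(results)
```

With a cheap, fast model, fanning out several such calls per user action becomes economical, which is the point the paragraph above is making.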

Also see what this means from Ben’s Bites and The Neuron; and as The Rundown AI asserts:

Why it matters: While it’s not GPT-5, the price and capabilities of this mini-release significantly lower the barrier to entry for AI integrations — and mark a massive leap over GPT-3.5 Turbo. With models getting cheaper, faster, and more intelligent with each release, the perfect storm for AI acceleration is forming.


Nvidia: More AI Waves Are Taking Shape — from seekingalpha.com by Eric Sprague

Summary

  • Nvidia Corporation is transitioning from a GPU designer to an AI factory builder.
  • AI spending will continue to grow in healthcare, government, and robotics.
  • CEO Jensen Huang says the AI robot industry could be bigger than the auto and consumer electronics industries combined.

Byte-Sized Courses: NVIDIA Offers Self-Paced Career Development in AI and Data Science — from blogs.nvidia.com by Andy Bui
Industry experts gather to share advice on starting a career in AI, highlighting technical training and certifications for career growth.

 

From DSC:
I realize I lose a lot of readers on this Learning Ecosystems blog because I choose to talk about my faith and integrate scripture into these postings. So I have stayed silent on matters of politics — as I’ve been hesitant to lose even more people. But I can no longer stay silent re: Donald Trump.

I, too, fear for our democracy if Donald Trump becomes our next President. He is dangerous to our democracy.

Also, I can see now how Hitler came to power.

And look out other countries that Trump doesn’t like. He is dangerous to you as well.

He doesn’t care about the people of the United States (nor any other nation). He cares only about himself and gaining power. Look out if he becomes our next president. 


From Stefan Bauschard:

Unlimited Presidential power. According to Trump v. United States, the “President may not be prosecuted for exercising his core constitutional powers, and he is entitled to at least presumptive immunity from prosecution for his official acts.” Justice Sotomayor says this makes the President a “king.” This power + surveillance + AGI/autonomous weapons mean the President is now the most powerful king in the history of the world.

Democracy is only 200 years old.

 

A Right to Warn about Advanced Artificial Intelligence — from righttowarn.ai

We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity.

We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks [1, 2, 3], as have governments across the world [4, 5, 6] and other AI experts [7, 8, 9].

We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.

 

Instructors as Innovators: a Future-focused Approach to New AI Learning Opportunities, With Prompts — from papers.ssrn.com by Ethan R. Mollick and Lilach Mollick

Abstract

This paper explores how instructors can leverage generative AI to create personalized learning experiences for students that transform teaching and learning. We present a range of AI-based exercises that enable novel forms of practice and application including simulations, mentoring, coaching, and co-creation. For each type of exercise, we provide prompts that instructors can customize, along with guidance on classroom implementation, assessment, and risks to consider. We also provide blueprints, prompts that help instructors create their own original prompts. Instructors can leverage their content and pedagogical expertise to design these experiences, putting them in the role of builders and innovators. We argue that this instructor-driven approach has the potential to democratize the development of educational technology by enabling individual instructors to create AI exercises and tools tailored to their students’ needs. While the exercises in this paper are a starting point, not definitive solutions, they demonstrate AI’s potential to expand what is possible in teaching and learning.

 

Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities, including the University of Edinburgh, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
 

Addressing equity and ethics in artificial intelligence — from apa.org by Zara Abrams
Algorithms and humans both contribute to bias in AI, but AI may also hold the power to correct or reverse inequities among humans

“The conversation about AI bias is broadening,” said psychologist Tara Behrend, PhD, a professor at Michigan State University’s School of Human Resources and Labor Relations who studies human-technology interaction and spoke at CES about AI and privacy. “Agencies and various academic stakeholders are really taking the role of psychology seriously.”


NY State Bar Association Joins Florida and California on AI Ethics Guidance – Suggests Some Surprising Implications — from natlawreview.com by James G. Gatto

The NY State Bar Association (NYSBA) Task Force on Artificial Intelligence has issued a nearly 80-page report (Report) and recommendations on the legal, social and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. This detailed Report also reviews AI-based software, generative AI technology and other machine learning tools that may enhance the profession, but which also pose risks for individual attorneys’ understanding of new, unfamiliar technology, as well as courts’ concerns about the integrity of the judicial process. It also makes recommendations for NYSBA adoption, including proposed guidelines for responsible AI use. This Report is perhaps the most comprehensive report to date by a state bar association. It is likely this Report will stimulate much discussion.

For those of you who want the “Cliff Notes” version of this report, here is a table that summarizes by topic the various rules mentioned and a concise summary of the associated guidance.

The Report includes four primary recommendations:


 

 

 

The University Student’s Guide To Ethical AI Use  — from studocu.com; with thanks to Jervise Penton at 6XD Media Group for this resource

This comprehensive guide offers:

  • Up-to-date statistics on the current state of AI in universities, including how institutions and students are currently using artificial intelligence
  • An overview of popular AI tools used in universities and their limitations as study tools
  • Tips on how to ethically use AI and how to maximize its capabilities for students
  • Current existing punishment and penalties for cheating using AI
  • A checklist of questions to ask yourself, before, during, and after an assignment to ensure ethical use

Some of the key facts you might find interesting are:

  • The total value of AI being used in education is estimated to reach $53.68 billion by the end of 2032.
  • 68% of students say using AI has impacted their academic performance positively.
  • Educators using AI tools say the technology helps speed up their grading process by as much as 75%.
 

AI-related tools and tips dominate ’60 in 60′ Techshow session — from abajournal.com by Danielle Braff

Four days of seminars, lectures and demonstrations at the 39th annual ABA Techshow boiled down to Saturday morning’s grand finale, where panelists rounded up their favorite tech tips and apps. The underlying theme: artificial intelligence.

“It’s an amazing tool, but it’s kind of scary, so watch out,” said Cynthia Thomas, the Techshow co-chair, and owner of PLMC & Associates, talking about the new tool from OpenAI, Sora, which takes text and turns it into video.

Other panelists during the traditional Techshow closer, “60 sites, 60 tips and gadgets and gizmos,” highlighted a wide range of AI-enabled or AI-augmented tools to help users perform a large range of tasks, including quickly sifting through user reviews for products, generating content, or keeping up to date on the latest AI tools. For those looking for non-AI tips and tools, they also suggested several devices, websites, tips and apps that have helped them with their practice and with life in general.


ABA Techshow 2024: Ethics in the Age of Legal Technology — from bnnbreaking.com by Rafia Tasleem

ABA Techshow 2024 stressed the importance of ethics in legal technology adoption. Ethics lawyer Stuart I. Teicher warned of the potential data breaches and urged attorneys to be proactive in understanding and supervising new tools. Education and oversight are key to maintaining data protection and integrity.


Startup Alley Competition Proves It Continues To Be All About AI — from abovethelaw.com by Joe Patrice

It might be more accurate to call TECHSHOW an industry showcase, because with each passing year it seems that more and more of the show involves other tech companies looking to scoop up enterprising new companies. That tone is set by the conference’s opening event: the annual Startup Alley pitch competition.

This year, 15 companies presented. If you were taking a shot every time someone mentioned “AI” then my condolences because you are now dead. If you included “machine learning” or “large language model” then you’ve died, come back as a zombie, and been killed again.


Here Are the Winners of ABA Techshow’s 8th Annual Startup Alley Pitch Competition — from lawnext.com by Bob Ambrogi

Here were the companies that won the top three spots:

  1. AltFee, a product that helps law firms replace the billable hour with fixed-fee pricing.
  2. Skribe.ai, an alternative to traditional court reporting that promises “a better way to take testimony.”
  3. Paxton AI, an AI legal assistant.

Class action firms ask US federal courts to encourage virtual testimony — from reuters.com by Nate Raymond

Summary:

  • Lawyers at Hagens Berman are leading charge to change rules
  • Proposal asks judiciary to ‘effectuate a long overdue modernization’ of rules

 

From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the enormous power and reach of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking or ideas. This is for real. With real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit. I’m not bashing these platforms per se. But my point is that there are real impacts due to a variety of technologies. What goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.


 

 

OpenAI announces leadership transition — from openai.com
Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor.

Excerpt (emphasis DSC):

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.


As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

From DSC:
I’m not here to pass judgment, but all of us on planet Earth should be at least concerned with this disturbing news.

AI is one of the most powerful sets of emerging technologies on the planet right now. OpenAI is arguably the most powerful vendor/innovator/influencer/leader in that space. And Sam Altman was the face of OpenAI — and arguably of AI itself. So this is a big deal.

What concerns me is what is NOT being relayed in this posting:

  • What was being hidden from OpenAI’s Board?
  • What else doesn’t the public know? 
  • Why is Greg Brockman stepping down as Chairman of the Board?

To whom much is given, much is expected.


Also related/see:

OpenAI CEO Sam Altman ousted, shocking AI world — from washingtonpost.com by Gerrit De Vynck and Nitasha Tiku
The artificial intelligence company’s directors said he was not ‘consistently candid in his communications with the board’

Altman’s sudden departure sent shock waves through the technology industry and the halls of government, where he had become a familiar presence in debates over the regulation of AI. His rise and apparent fall from tech’s top rung is one of the fastest in Silicon Valley history. In less than a year, he went from being Bay Area famous as a failed start-up founder who reinvented himself as a popular investor in small companies to becoming one of the most influential business leaders in the world. Journalists, politicians, tech investors and Fortune 500 CEOs alike had been clamoring for his attention.

OpenAI’s Board Pushes Out Sam Altman, Its High-Profile C.E.O. — from nytimes.com by Cade Metz

Sam Altman, the high-profile chief executive of OpenAI, who became the face of the tech industry’s artificial intelligence boom, was pushed out of the company by its board of directors, OpenAI said in a blog post on Friday afternoon.


From DSC:
Updates — I just saw these items

Sam Altman fired as CEO of OpenAI — from theverge.com by Jay Peters
In a sudden move, Altman is leaving after the company’s board determined that he ‘was not consistently candid in his communications.’ President and co-founder Greg Brockman has also quit.



 

Why Kindness at Work Pays Off — from hbr.org by Andrew Swinand; via Roberto Ferraro

Summary:
Whether you’re just entering the workforce, starting a new job, or transitioning into people management, kindness can be a valuable attribute that speaks volumes about your character, commitment, and long-term value. Here are a few simple routines you can integrate into your everyday work life that will spread kindness and help create a culture of kindness at your organization.

  • Practice radical self-care. The best way to be a valuable, thoughtful team member is to be disciplined about your own wellness — your physical, emotional, and mental well-being.
  • Do your job. Start with the basics by showing up on time and doing your job to the best of your ability. This is where your self-care practice comes into play — you can’t do your best work without taking care of yourself first.
  • Reach out to others with intention. Make plans to meet virtually or, even better, in person with your colleagues. Ask about their pets, their recent move, or their family. Most importantly, practice active listening.
  • Recognize and acknowledge people. Authentic, thoughtful interactions show that you’re thinking about the other person and reflecting on their unique attributes and value, which can cement social connections.
  • Be conscientious with your feedback. Being kind means offering feedback for the betterment of the person receiving it and the overall success of your company.

“When anxiety is high and morale is low, kindness isn’t a luxury — it’s a necessity. With mass layoffs, economic uncertainty, and geopolitical tensions, kindness is needed now more than ever, especially at work.”

 

41 states sue Meta, claiming Instagram, Facebook are addictive, harm kids — from washingtonpost.com by Cristiano Lima and Naomi Nix
The action marks the most sprawling state challenge to date over social media’s impact on children’s mental health

Forty-one states and the District of Columbia are suing Meta, alleging that the tech giant harms children by building addictive features into Instagram and Facebook. Tuesday’s legal actions represent the most significant effort by state enforcers to rein in the impact of social media on children’s mental health.

 

Psalm 119:9-12

9 How can a young person stay on the path of purity?
    By living according to your word.
10 I seek you with all my heart;
    do not let me stray from your commands.
11 I have hidden your word in my heart
    that I might not sin against you.
12 Praise be to you, Lord;
    teach me your decrees.

Proverbs 19:20-21

20 Listen to advice and accept discipline,
and at the end you will be counted among the wise.
21 Many are the plans in a person’s heart,
but it is the Lord’s purpose that prevails.

Jeremiah 18:7-10

If at any time I announce that a nation or kingdom is to be uprooted, torn down and destroyed, and if that nation I warned repents of its evil, then I will relent and not inflict on it the disaster I had planned. And if at another time I announce that a nation or kingdom is to be built up and planted, and if it does evil in my sight and does not obey me, then I will reconsider the good I had intended to do for it.

Psalm 119:4-6

You have laid down precepts
    that are to be fully obeyed.
Oh, that my ways were steadfast
    in obeying your decrees!
Then I would not be put to shame
    when I consider all your commands.

 

Comparing Online and AI-Assisted Learning: A Student’s View — from educationnext.org by Daphne Goldstein
An 8th grader reviews traditional Khan Academy and its AI-powered tutor, Khanmigo

Hi everyone, I’m Daphne, a 13-year-old going into 8th grade.

I’m writing to compare “regular” Khan Academy (no AI) to Khanmigo (powered by GPT-4), using three of my own made-up criteria.

They are: efficiency, effectiveness, and enjoyability. Efficiency is how fast I am able to cover a math topic and get basic understanding. Effectiveness is my quality of understanding—the difference between basic and advanced understanding. And the final one—most important to kids and maybe least important to adults who make kids learn math—is enjoyability.


7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


First Impressions with GPT-4V(ision) — from blog.roboflow.com by James Gallagher; via Donald Clark on LinkedIn

On September 25th, 2023, OpenAI announced the rollout of two new features that extend how people can interact with its recent and most advanced model, GPT-4: the ability to ask questions about images and to use speech as an input to a query.

This functionality marks GPT-4’s move into being a multimodal model. This means that the model can accept multiple “modalities” of input – text and images – and return results based on those inputs. Bing Chat, developed by Microsoft in partnership with OpenAI, and Google’s Bard model both support images as input, too. Read our comparison post to see how Bard and Bing perform with image inputs.

In this guide, we are going to share our first impressions with the GPT-4V image input feature.


 

Don’t Be Fooled: How You Can Master Media Literacy in the Digital Age — from youtube.com by Professor Sue Ellen Christian

During this special keynote presentation, Western Michigan University (WMU) professor Sue Ellen Christian speaks about the importance of media literacy for all ages and how we can help educate our friends and families about media literacy principles. Hosted by the Grand Rapids Public Library and GRTV, a program of the Grand Rapids Community Media Center. Special thanks to the Grand Rapids Public Library Foundation for their support of this program.

Excerpts:

“Media literacy is the ability to access, analyze, evaluate, and create media in a variety of forms.” — Center for Media Literacy

5 things to do when confronted with concerns about content.


Also relevant/see:

Kalamazoo Valley Museum’s newest exhibit teaches community about media literacy — from mlive.com by Gabi Broekema

 
© 2024 | Daniel Christian