Introducing Microsoft 365 Copilot — your copilot for work

Copilot — A whole new way to work — from news.microsoft.com

  • Copilot in Word writes, edits, summarizes and creates right alongside people as they work.
  • Copilot in PowerPoint enables the creative process by turning ideas into a designed presentation through natural language commands.
  • Copilot in Excel helps unlock insights, identify trends or create professional-looking data visualizations in a fraction of the time.
  • Copilot in Outlook can help synthesize and manage the inbox to allow more time to be spent on actually communicating.
  • Copilot in Teams makes meetings more productive with real-time summaries and action items directly in the context of the conversation.
  • Copilot in Power Platform will help developers of all skill levels accelerate and streamline development with low-code tools, via the introduction of two new capabilities within Power Apps and Power Virtual Agents.
  • Business Chat brings together data from across documents, presentations, email, calendar, notes and contacts to help summarize chats, write emails, find key dates or even write a plan based on other project files.

Introducing Microsoft 365 Copilot – your copilot for work — from blogs.microsoft.com by Jared Spataro

“Today marks the next major step in the evolution of how we interact with computing, which will fundamentally change the way we work and unlock a new wave of productivity growth,” said Satya Nadella, Chairman and CEO, Microsoft. “With our new copilot for work, we’re giving people more agency and making technology more accessible through the most universal interface — natural language.”

Introducing Microsoft 365 Copilot — A whole new way to work — from microsoft.com by Colette Stallbaumer

Excerpt:

Copilot is integrated into Microsoft 365 in two ways. It works alongside you, embedded in the Microsoft 365 apps you use every day—Word, Excel, PowerPoint, Outlook, Teams, and more—to unleash creativity, unlock productivity, and uplevel skills. Today, we’re also announcing an entirely new experience: Business Chat. Business Chat works across the LLM, the Microsoft 365 apps, and your data—your calendar, emails, chats, documents, meetings, and contacts—to do things you’ve never been able to do before. You can give it natural language prompts like “tell my team how we updated the product strategy” and it will generate a status update based on the morning’s meetings, emails, and chat threads.


A new era for AI and Google Workspace — from workspace.google.com by Johanna Voolich Wright

Excerpt:

As we embark on this next journey, we will be bringing these new generative-AI experiences to trusted testers on a rolling basis throughout the year, before making them available publicly.

With these features, you’ll be able to:

  • draft, reply, summarize, and prioritize your Gmail
  • brainstorm, proofread, write, and rewrite in Docs
  • bring your creative vision to life with auto-generated images, audio, and video in Slides
  • go from raw data to insights and analysis via auto-completion, formula generation, and contextual categorization in Sheets
  • generate new backgrounds and capture notes in Meet
  • enable workflows for getting things done in Chat

Here’s a look at the first set of AI-powered features, which make writing even easier.

 


9 ways ChatGPT will help CIOs — from enterprisersproject.com by Katie Sanders
What are the potential benefits of this popular tool? Experts share how it can help CIOs be more efficient and bring competitive differentiation to their organizations.

Excerpt:

Don’t assume this new technology will replace your job. As Mark Lambert, a senior consultant at netlogx, says, “CIOs shouldn’t view ChatGPT as a replacement for humans but as a new and exciting tool that their IT teams can utilize. From troubleshooting IT issues to creating content for the company’s knowledge base, artificial intelligence can help teams operate more efficiently and effectively.”



Would you let ChatGPT control your smart home? — from theverge.com

While the promise of an inherently competent, eminently intuitive voice assistant — a flawless butler for your home — is very appealing, I fear the reality could be more Space Odyssey than Downton Abbey. But let’s see if I’m proven wrong.


How ChatGPT Is Being Used To Enhance VR Training — from vrscout.com by Kyle Melnick

Excerpt:

The company claims that its VR training program can be used to prepare users for a wide variety of challenging scenarios, whether you’re a recent college graduate preparing for a difficult job interview or a manager simulating a particularly tough performance review. Users can customize their experiences depending on their role and receive real-time feedback based on their interactions with the AI.


From DSC:
Below are some example topics/articles involving healthcare and AI. 


Role of AI in Healthcare — from doctorsexplain.media
The role of Artificial Intelligence (AI) in healthcare is becoming increasingly important as technology advances. AI has the potential to revolutionize the healthcare industry, from diagnosis and treatment to patient care and management. AI can help healthcare providers make more accurate diagnoses, reduce costs, and improve patient outcomes.

60% of patients uncomfortable with AI in healthcare settings, survey finds — from healthcaredive.com by Hailey Mensik

Dive Brief:

  • About six in 10 U.S. adults said they would feel uncomfortable if their provider used artificial intelligence tools to diagnose them and recommend treatments in a care setting, according to a survey from the Pew Research Center.
  • Some 38% of respondents said using AI in healthcare settings would lead to better health outcomes while 33% said it would make them worse, and 27% said it wouldn’t make much of a difference, the survey found.
  • Ultimately, men, younger people and those with higher education levels were the most open to their providers using AI.

The Rise of the Superclinician – How Voice AI Can Improve the Employee Experience in Healthcare — from medcitynews.com by Tomer Garzberg
Voice AI is the new frontier in healthcare. With its constantly evolving landscape, the healthcare […]

Excerpt:

Voice AI can generate up to 30% higher clinician productivity by automating healthcare use cases such as the following:

  • Updating records
  • Provider duress
  • Platform orchestration
  • Shift management
  • Client data handoff
  • Home healthcare
  • Maintenance
  • Equipment ordering
  • Meal preferences
  • Case data queries
  • Patient schedules
  • Symptom logging
  • Treatment room setup
  • Patient condition education
  • Patient support recommendations
  • Medication advice
  • Incident management
  • … and many more

ChatGPT is poised to upend medical information. For better and worse. — from usatoday.com by Karen Weintraub

Excerpt:

But – and it’s a big “but” – the information these digital assistants provide might be more inaccurate and misleading than basic internet searches.

“I see no potential for it in medicine,” said Emily Bender, a linguistics professor at the University of Washington. By their very design, these large-language technologies are inappropriate sources of medical information, she said.

Others argue that large language models could supplement, though not replace, primary care.

“A human in the loop is still very much needed,” said Katie Link, a machine learning engineer at Hugging Face, a company that develops collaborative machine learning tools.

Link, who specializes in health care and biomedicine, thinks chatbots will be useful in medicine someday, but the technology isn’t ready yet.

 

From DSC:
Check this confluence of emerging technologies out!

Also see:

How to spot AI-generated text — from technologyreview.com by Melissa Heikkilä
The internet is increasingly awash with text written by AI software. We need new tools to detect it.

Excerpt:

This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

“If you have enough text, a really easy cue is the word ‘the’ occurs too many times,” says Daphne Ippolito, a senior research scientist at Google Brain, the company’s research unit for deep learning.

“A typo in the text is actually a really good indicator that it was human-written,” she adds.
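From DSC:
Ippolito’s “the” cue is simple enough to sketch in code. Below is a minimal illustration of that single heuristic in Python. The baseline rate and the threshold are assumptions for the sake of the example; a real detector would be calibrated on large human-written and machine-written corpora and would combine many such signals.

```python
import re

# Rough baseline for how often "the" appears in ordinary English prose.
# This figure is an illustrative assumption, not a calibrated value.
HUMAN_BASELINE = 0.05

def the_frequency(text: str) -> float:
    """Return the fraction of word tokens that are 'the'."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return tokens.count("the") / len(tokens) if tokens else 0.0

def crude_ai_cue(text: str) -> str:
    """Flag text whose 'the' rate is well above the assumed human baseline."""
    freq = the_frequency(text)
    if freq > HUMAN_BASELINE * 1.5:
        return f"'the' rate of {freq:.1%} is unusually high; worth a closer look"
    return f"'the' rate of {freq:.1%} looks ordinary"

print(crude_ai_cue("The cat sat on the mat while the dog watched the door."))
```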

7 Best Tech Developments of 2022 — from thetechranch.com

Excerpt:

As we near the end of 2022, it’s a great time to look back at some of the top technologies that have emerged this year. From AI and virtual reality to renewable energy and biotechnology, there have been a number of exciting developments that have the potential to shape the future in a big way. Here are some of the top technologies that have emerged in 2022:

 

The talent needed to adopt mobile AR in industry — from chieflearningofficer.com by Yao Huang Ph.D.

Excerpt:

Therefore, when adopting mobile AR to improve job performance, L&D professionals need to shift their mindset from offering training with AR alone to offering performance support with AR in the middle of the workflow.

A learning director from the supply chain industry pointed out that “70 percent of the information needed to build performance support systems already exists. The problem is it is all over the place and is available on different systems.”

It is the learning and development professional’s job to design a solution with the capability of the technology and present it in a way that most benefits the end users.

All participants revealed that mobile AR adoption in L&D is still new, but growing rapidly. L&D professionals face many opportunities and challenges. Understanding the benefits, challenges and opportunities of mobile AR used in the workplace is imperative.

A brief insert from DSC:
Augmented Reality (AR) is about to hit the mainstream in the next 1-3 years. It will connect the physical world with the digital world in powerful, helpful ways (and likely in negative ways as well). I think it will be far bigger and more commonly used than Virtual Reality (VR). (By the way, I’m also including Mixed Reality (MR) within the greater AR domain.) With Artificial Intelligence (AI) making strides in object recognition, AR could be huge.

Learning & Development groups should ask for funding soon — or develop proposals for future funding as the new hardware and software products mature — in order to upskill at least some members of their groups in the near future.

As within Teaching & Learning Centers within higher education, L&D groups need to practice what they preach — and be sure to train their own people as well.

 

Understanding the Overlap Between UDL and Digital Accessibility — from boia.org

Excerpt:

Implementing UDL with a Focus on Accessibility
UDL is a proven methodology that benefits all students, but when instructors embrace universal design, they need to consider how their decisions will affect students with disabilities.

Some key considerations to keep in mind:

  • Instructional materials should not require a certain type of sensory perception.
  • A presentation that includes images should have accurate alternative text (also called alt text) for those images.
  • Transcripts and captions should be provided for all audio content.
  • Color alone should not be used to convey information, since some students may not perceive color (or have different cultural understandings of colors).
  • Student presentations should also follow accessibility guidelines. This increases the student’s workload, but it’s an excellent opportunity to teach the importance of accessibility.
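From DSC:
Two of the considerations above, alt text and captions, lend themselves to automated checking. Here is a minimal sketch of such a check in Python, assuming the BeautifulSoup library is available; a real audit would use a full WCAG checker, which covers far more than this.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def audit_accessibility(html: str) -> list[str]:
    """Flag images without alt text and media without caption tracks."""
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        if not (img.get("alt") or "").strip():
            issues.append(f"Image missing alt text: {img.get('src', '?')}")
    for media in soup.find_all(["audio", "video"]):
        if not media.find("track", kind="captions"):
            issues.append(f"<{media.name}> element has no captions track")
    return issues

sample = '<img src="chart.png"><video src="lecture.mp4"></video>'
for issue in audit_accessibility(sample):
    print(issue)
```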
 

From DSC:
I received an email the other day re: a TytoCare Exam Kit. It said (with some emphasis added by me):

With a TytoCare Exam Kit connected to Spectrum Health’s 24/7 Virtual Urgent Care, you and your family can have peace of mind and a quick, accurate diagnosis and treatment plan whenever you need it without having to leave your home.

Your TytoCare Exam Kit will allow your provider to listen to your lungs, look inside your ears or throat, check your temperature, and more during a virtual visit.

Why TytoCare?

    • Convenience – With a TytoCare Exam Kit and our 24/7/365 On-Demand Virtual Urgent Care, there is no drive, no waiting room, and no waiting for an appointment.
    • Peace of Mind – Stop debating about whether symptoms are serious enough to do something about them.
    • Savings – Without the cost of gas or taking time off work, you get the reliable exams and diagnosis you need. With a Virtual Urgent Care visit you’ll never pay more than $50. That’s cheaper than an in-person urgent care visit, with the same level of care.

From DSC:
It made me reflect on what #telehealth has morphed into these days. Then it made me wonder (again), what #telelegal might become in the next few years…? Hmmm. I hope the legal field can learn from the healthcare industry. It could likely bring more access to justice (#A2J), increased productivity (for several of the parties involved), as well as convenience, peace of mind, and cost savings.


 

 

Your iPhone Has 26 New Accessibility Tools You Shouldn’t Ignore — from ios.gadgethacks.com by Jovana Naumovski

Excerpt (emphasis DSC):

Magnifier has a new Door Detection option on iOS 16, which helps blind and low-vision users locate entryways when they arrive at their destination. The tool can tell you how far away the door is, if the door is open or closed, how to open it (push it, turn the knob, pull the handle, etc.), what any signs say (like room numbers), what any symbols mean (like people icons for restrooms), and more.

From DSC:
By the way, this kind of feature would work well in tandem with devices such as the Double Robotics Telepresence Robot — i.e., using Machine-to-Machine (M2M) communications to let the robot and automatic doors communicate with each other so that remote students can “get around on campus.”

 

It would be great to have M2M communications with mobile robots to get through doors and to open elevator doors as well
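
A minimal sketch of what such a machine-to-machine exchange could look like, using MQTT (a common M2M messaging protocol) via the paho-mqtt library. The broker address and topic name here are hypothetical, and a real deployment would add authentication and safety interlocks on the door controller’s side.

```python
import json
import paho.mqtt.publish as publish  # pip install paho-mqtt

# Hypothetical broker and topic; a real building would define its own scheme.
BROKER_HOST = "broker.campus.example"
DOOR_TOPIC = "campus/library/doors/main-entrance/command"

def request_door_open(robot_id: str) -> None:
    """Publish an 'open' request from a telepresence robot to a door controller."""
    payload = json.dumps({"robot": robot_id, "action": "open", "hold_seconds": 10})
    # QoS 1 so the door controller must acknowledge receiving the request.
    publish.single(DOOR_TOPIC, payload, qos=1, hostname=BROKER_HOST)

request_door_open("telepresence-robot-017")
```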

 


Along the lines of accessibility-related items, also relevant/see:

Microsoft introduces sign language for Teams — from inavateonthenet.net

Excerpt:

Microsoft has announced a sign language view for Teams to help signers and others who use sign language. The information on screen will be prioritised on centre stage, in a consistent location, throughout every meeting.

When sign language view is enabled, the prioritised video streams automatically appear at the right aspect ratio and at the highest available quality. Like pinning and captioning, sign language view is personal to each user and will not impact what others see in the meeting.


 

 
 

Virtual or in-person: The next generation of trial lawyers must be prepared for anything — from reuters.com by Stratton Horres and Karen L. Bashor

[Photo: A view of the jury box (front), where jurors sit facing the judge's chair (C), the witness stand (R), and the stenographer's desk (L) in courtroom 422 of the New York Supreme Court]

Excerpt:

In this article, we will examine several key ways in which COVID-19 has changed trial proceedings, strategy and preparation and how mentoring programs can make a difference.

COVID-19 has shaken up the jury trial experience for both new and experienced attorneys. For those whose only trials have been conducted under COVID-19 restrictions, and for everyone easing back into in-person trials, these are key elements to keep in mind going forward. Firm mentoring programs should be considered to prepare the future generation of trial lawyers for both live and virtual trials.

From DSC:
I think law firms will need to expand the number of disciplines coming to their strategic tables. That is, as more disciplines are required to successfully practice law in the 21st century, more folks with technical backgrounds and/or abilities will be needed. Web front-end and back-end developers, User Experience Designers, Instructional Designers, Audio/Visual Specialists, and others come to my mind. Such people can help develop the necessary spaces, skills, training, and mentoring programs mentioned in this article. As within our learning ecosystems, the efficient and powerful use of teams of specialists will deliver the best products and services.

 

How Long Should a Branching Scenario Be? — from christytuckerlearning.com by Christy Tucker
How long should a branching scenario be? Is 45 minutes too long? Is there an ideal length for a branching scenario?

Excerpt:

Most of the time, the branching scenarios and simulations I build are around 10 minutes long. Overall, I usually end up at 5-15 minutes for branching scenarios, with interactive video scenarios being at the longer end.

From DSC:
This makes sense to me, as (up to) 6 minutes turned out to be an ideal length for videos.

Excerpt from Optimal Video Length for Student Engagement — from blog.edx.org

The optimal video length is 6 minutes or shorter — students watched most of the way through these short videos. In fact, the average engagement time of any video maxes out at 6 minutes, regardless of its length. And engagement times decrease as videos lengthen: For instance, on average students spent around 3 minutes on videos that are longer than 12 minutes, which means that they engaged with less than a quarter of the content. Finally, certificate-earning students engaged more with videos, presumably because they had greater motivation to learn the material. (These findings appeared in a recent Wall Street Journal article, An Early Report Card on Massive Open Online Courses and its accompanying infographic.)

The take-home message for instructors is that, to maximize student engagement, they should work with instructional designers and video producers to break up their lectures into small, bite-sized pieces.

 

How Older Adults Access Resources Online — from blog.getsetup.io

Top Insights:

  • It’s clear that how older adults use technology has changed. During COVID, more and more older adults began using a wide range of devices, which means there is no one-size-fits-all approach to this audience.
  • In the United States, desktop devices are still the most common form of media consumption for virtual learning and health.
  • But mobile devices are still dominant for passive content consumption.
  • Consumption by different US states varies based on the quality of internet infrastructure and availability of newer devices.
  • In India and Australia, mobile devices outperform desktops for virtual learning.
  • Developing browser-first solutions for engagement is key to reaching a wider audience.
  • Applications and websites that aim to make the user experience as seamless as possible across multiple devices have a greater chance of being used and picked up more effectively by older adults of a variety of ages.
  • The variations in device types make it very challenging to build LIVE streaming technology that can scale across platforms.
  • Chrome is a dominant browser among the 55+ group, allowing sophisticated video streaming applications to be built that were not possible with Internet Explorer.
  • While Zoom became the de facto standard for video-based sessions, older adult learners were 11x more likely to attend class in our browser Lounge than enter the Zoom classes.

Also relevant/see:

 

DSC: What?!?! How might this new type of “parallel reality” impact smart classrooms, conference rooms, and board rooms? And/or our living rooms? Will it help deliver more personalized learning experiences within a classroom?


 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation. (A small sketch of this kind of transcript search appears below.)
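
The last item in that list, searching a video for spoken words, reduces to a simple lookup once a timestamped transcript exists. A minimal sketch, with hypothetical transcript data standing in for the output of a speech-to-text service:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the recording
    text: str

# Hypothetical transcript segments, e.g. produced by a speech-to-text service.
transcript = [
    Segment(12.0, "Welcome to today's session on learning ecosystems."),
    Segment(95.5, "Augmented reality can connect physical and digital spaces."),
    Segment(240.0, "Now let's look at augmented reality in the classroom."),
]

def search_spoken_words(segments: list[Segment], query: str) -> list[Segment]:
    """Return the transcript segments whose spoken text mentions the query."""
    q = query.lower()
    return [s for s in segments if q in s.text.lower()]

for seg in search_spoken_words(transcript, "augmented reality"):
    minutes, seconds = divmod(int(seg.start), 60)
    print(f"{minutes:02d}:{seconds:02d}  {seg.text}")
```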

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm… interesting times ahead.

 

Coding Isn’t a Necessary Leadership Skill — But Digital Literacy Is — from hbr.org by Sophia Matveeva

Summary (emphasis DSC):

While most leaders now know that tech is a vital part of business, many are wondering what they really need to know about technology to succeed in the digital age. Coding bootcamps may appeal to some, but for many leaders, learning to code is simply not the best investment. It takes a long time to become a proficient coder, and it still doesn’t give you a holistic overview of how digital technologies get made. The good news is that most leaders don’t need to learn to code. Instead, they need to learn how to work with people who code. This means becoming a digital collaborator and learning how to work with developers, data scientists, user experience designers, and product managers — not completely retraining. The author presents four ways for non-technical leaders to become digital collaborators.

 

‘Hologram patients’ and mixed reality headsets help train UK medical students in world first — from uk.news.yahoo.com

Excerpts:

Medical students in Cambridge, England are experiencing a new way of “hands-on learning” – featuring the use of holographic patients.

Through a mixed reality training system called HoloScenarios, students at Addenbrooke’s Hospital, part of the Cambridge University Hospitals NHS Foundation Trust, are now being trained via immersive holographic patient scenarios in a world first.

The new technology is aimed at providing a more affordable alternative to traditional immersive medical simulation training involving patient actors, which can demand a lot of resources.

Developers also hope the technology will help improve access to medical training worldwide.

 