10 jobs that are safe in an AI world — from linkedin.com by Kai-Fu Lee

Excerpts:

Teaching
AI will be a great tool for teachers and educational institutions, as it will help educators figure out how to personalize curriculum based on each student’s competence, progress, aptitude, and temperament. However, teaching will still need to be oriented around helping students figure out their interests, teaching students to learn independently, and providing one-on-one mentorship. These are tasks that can only be done by a human teacher. As such, there will still be a great need for human educators in the future.

Criminal defense law
Top lawyers will have nothing to worry about when it comes to job displacement. Reasoning across domains, winning the trust of clients, applying years of experience in the courtroom, and having the ability to persuade a jury are all examples of the cognitive complexities, strategies, and modes of human interaction that are beyond the capabilities of AI. However, a lot of paralegal and preparatory work like document review, analysis, creating contracts, handling small cases, packing cases, and coming up with recommendations can be done much better and more efficiently with AI. The costs of law make it worthwhile for AI companies to go after AI paralegals and AI junior lawyers, but not top lawyers.

 

From DSC:
In terms of teaching, I agree that while #AI will help personalize learning, there will still be a great need for human teachers, professors, and trainers. I also agree with my boss (and with some of the author’s viewpoints here, but not all) that many kinds of legal work will still need the human touch and human thought processes. I diverge from his thinking in terms of scope: the need for human lawyers will go far beyond those practicing criminal law.

 

Also see:

15 business applications for artificial intelligence and machine learning — from forbes.com

Excerpt:

Fifteen members of Forbes Technology Council discuss some of the latest applications they’ve found for AI/ML at their companies. Here’s what they had to say…

 

 

 

How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.
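For readers curious what “machine learning and game theory” looks like in practice: Fang’s line of work grows out of Stackelberg security games, in which a defender spreads limited patrol resources across targets before an attacker picks the most vulnerable one. The sketch below is a heavily simplified, zero-sum toy version of that idea (my own illustration, not her actual models); it uses a small linear program to choose coverage probabilities so that the attacker’s best uncovered target is worth as little as possible:

```python
import numpy as np
from scipy.optimize import linprog

def patrol_coverage(values, resources):
    """Toy zero-sum security game: pick coverage probabilities c_i (one per
    target, sum <= resources) to minimize the attacker's best payoff,
    max_i values[i] * (1 - c_i).

    LP variables: x = [c_1, ..., c_n, z]; minimize z subject to
        values[i] * (1 - c_i) <= z    for every target i
        sum_i c_i <= resources,  0 <= c_i <= 1,  z >= 0
    """
    v = np.asarray(values, dtype=float)
    n = len(v)
    cost = np.zeros(n + 1)
    cost[-1] = 1.0                      # objective: minimize z
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):                  # v_i*(1 - c_i) <= z  rewritten as  -v_i*c_i - z <= -v_i
        A[i, i] = -v[i]
        A[i, -1] = -1.0
        b[i] = -v[i]
    A[n, :n] = 1.0                      # total coverage limited by patrol resources
    b[n] = resources
    bounds = [(0.0, 1.0)] * n + [(0.0, None)]
    res = linprog(cost, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:n], res.x[-1]

# Three targets of value 10, 5, and 1, with one patrol resource to spread around.
coverage, worst_case = patrol_coverage(values=[10.0, 5.0, 1.0], resources=1.0)
print(coverage.round(3), round(worst_case, 3))
```

With those toy numbers, the program splits coverage roughly two-thirds and one-third across the two high-value targets and leaves the low-value one uncovered, holding the attacker’s best payoff to about 3.33.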

 

 

How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.

 

 

Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

 

 

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”

 

 

Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).
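The practices the survey describes (periodic review of outcomes plus a process for overriding questionable results) amount to a human-in-the-loop review queue. Below is a minimal, hypothetical sketch of that pattern; the class names, the 0.8 confidence threshold, and the “loan-123” example are all invented for illustration and are not taken from the report:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Decision:
    id: str
    prediction: str
    confidence: float
    overridden_by: str | None = None
    final_outcome: str | None = None

@dataclass
class ReviewQueue:
    """Collects questionable model decisions for periodic (e.g., weekly)
    human review, with an override path."""
    threshold: float = 0.8
    pending: list[Decision] = field(default_factory=list)

    def record(self, decision: Decision) -> None:
        # Flag anything below the confidence threshold for human eyes.
        if decision.confidence < self.threshold:
            self.pending.append(decision)

    def override(self, decision: Decision, reviewer: str, outcome: str) -> None:
        # A human reviewer replaces a questionable model outcome.
        decision.overridden_by = reviewer
        decision.final_outcome = outcome

    def weekly_report(self) -> str:
        flagged = len(self.pending)
        overridden = sum(1 for d in self.pending if d.overridden_by)
        return (f"{datetime.now():%Y-%m-%d}: {flagged} decisions flagged, "
                f"{overridden} overridden after review")

# Toy usage
queue = ReviewQueue()
d = Decision(id="loan-123", prediction="deny", confidence=0.55)
queue.record(d)
queue.override(d, reviewer="analyst", outcome="approve")
print(queue.weekly_report())
```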

 

 

 

Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.

 

 

 

To higher ed: When the race track is going 180mph, you can’t walk or jog onto the track. [Christian]

From DSC:
When the race track is going 180mph, you can’t walk or jog onto the track.  What do I mean by that? 

Consider this quote from an article that Jeanne Meister wrote for Forbes, entitled “The Future of Work: Three New HR Roles in the Age of Artificial Intelligence”:*

This emphasis on learning new skills in the age of AI is reinforced by the most recent report on the future of work from McKinsey, which suggests that as many as 375 million workers around the world may need to switch occupational categories and learn new skills because approximately 60% of jobs will have at least one-third of their work activities able to be automated.

Go scan the job openings and you will likely see many that have to do with technology, and increasingly, with emerging technologies such as artificial intelligence, deep learning, machine learning, virtual reality, augmented reality, mixed reality, big data, cloud-based services, robotics, automation, bots, algorithm development, blockchain, and more. 

 

From Robert Half’s 2019 Technology Salary Guide 

 

 

How many of us have those kinds of skills? Did we get that training in the community colleges, colleges, and universities that we went to? Highly unlikely — even if you graduated from one of those institutions only 5-10 years ago. And many of those institutions are moving at the pace of a nice leisurely walk, with some moving at a jog and even fewer sprinting. But all of them are now being asked to enter a race track that’s moving at 180mph. Higher ed — and society at large — is not used to moving at this pace.

This is why I think that higher education and its regional accrediting organizations are either going to need to up their game hugely — going through a paradigm shift in their thinking, programming, curricula, and level of responsiveness — or watch while alternatives to traditional higher education increasingly attract learners away from them.

This is also why I think we’ll see an online-based, next-generation learning platform emerge. It will be much more nimble — able to offer up-to-the-minute, in-demand skills and competencies.

 

 

The below graphic is from:
Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages

 

 

 


 

* Three New HR Roles To Create Compelling Employee Experiences
These new HR roles include:

  1. IBM: Vice President, Data, AI & Offering Strategy, HR
  2. Kraft Heinz: Senior Vice President, Global HR, Performance and IT
  3. SunTrust: Senior Vice President, Employee Wellbeing & Benefits

What do these three roles have in common? All have been created in the last three years and acknowledge the growing importance of a company’s commitment to create a compelling employee experience by using data, research, and predictive analytics to better serve the needs of employees. In each case, the employee assuming the new role also brought a new set of skills and capabilities into HR. And importantly, the new roles created in HR address a common vision: create a compelling employee experience that mirrors a company’s customer experience.

 


 

An excerpt from McKinsey Global Institute | Notes from the Frontier | Modeling the Impact of AI on the World Economy 

Workers.
A widening gap may also unfold at the level of individual workers. Demand for jobs could shift away from repetitive tasks toward those that are socially and cognitively driven and others that involve activities that are hard to automate and require more digital skills. Job profiles characterized by repetitive tasks and activities that require low digital skills may experience the largest decline as a share of total employment, from some 40 percent to near 30 percent by 2030. The largest gain in share may be in nonrepetitive activities and those that require high digital skills, rising from some 40 percent to more than 50 percent. These shifts in employment would have an impact on wages. We simulate that around 13 percent of the total wage bill could shift to categories requiring nonrepetitive and high digital skills, where incomes could rise, while workers in the repetitive and low digital skills categories may potentially experience stagnation or even a cut in their wages. The share of the total wage bill of the latter group could decline from 33 to 20 percent. Direct consequences of this widening gap in employment and wages would be an intensifying war for people, particularly those skilled in developing and utilizing AI tools, and structural excess supply for a still relatively high portion of people lacking the digital and cognitive skills necessary to work with machines.
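The “around 13 percent” of the total wage bill in that excerpt is consistent with the group shares quoted in the same passage. A quick check of the arithmetic, using only the numbers given above:

```python
# Wage-bill share of the repetitive / low-digital-skill group, as quoted in the excerpt.
share_now = 0.33    # "could decline from 33 ..."
share_2030 = 0.20   # "... to 20 percent"

# The decline in that group's share is the portion of the total wage bill that
# moves toward nonrepetitive / high-digital-skill categories.
shifted = share_now - share_2030
print(f"{shifted:.0%} of the total wage bill shifts")   # prints: 13% of the total wage bill shifts

# Employment shares quoted in the same passage, for comparison.
repetitive_employment = (0.40, 0.30)       # roughly 40% now, near 30% by 2030
nonrepetitive_employment = (0.40, 0.50)    # roughly 40% now, more than 50% by 2030
```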

 


 

 


Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

  • Andrew Moore, the new chief of Google Cloud AI, co-chairs a task force on AI and national security with deep defense sector ties.
  • Moore leads the task force with Robert Work, the man who reportedly helped to create Project Maven.
  • Moore has given various talks about the role of AI and defense, once noting that it was now possible to deploy drones capable of surveilling “pretty much the whole world.”
  • One former Googler told Business Insider that the hiring of Moore is a “punch in the face” to those employees.

 

 

How AI can be a force for good — from science.sciencemag.org

Excerpt:

The AI revolution is equally significant, and humanity must not make the same mistake again. It is imperative to address new questions about the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies. This is why initiatives like the abovementioned AI4People and IEEE projects, the European Union (EU) strategy for AI, the EU Declaration of Cooperation on Artificial Intelligence, and the Partnership on Artificial Intelligence to Benefit People and Society are so important (see the supplementary materials for suggested further reading). A coordinated effort by civil society, politics, business, and academia will help to identify and pursue the best strategies to make AI a force for good and unlock its potential to foster human flourishing while respecting human dignity.

 

 

Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.

 

 

Robot wars — from ethicaljournalismnetwork.org by James Ball
How artificial intelligence will define the future of news

Excerpt:

There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.

The first is a future in which newsrooms and their reporters are robust: Thanks to the use of artificial intelligence, high-quality reporting has been enhanced. Not only do AI scripts manage the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.

Beyond business journalism, comprehensive sports stats AIs keep key figures in the hands of sports journalists, letting them focus on the games and the stories around them. The automated future has worked.

The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first for their financial and sports sections, then bringing in more advanced scripts capable of reshaping wire copy to suit their outlet’s political agenda. A few banner hires remain, but there is virtually no career path for those who would hope to replace them, and stories that can’t be tackled by AI are generally missed.
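The “monitor and track masses of data for outliers, flagging these to human reporters” role described above is, mechanically, anomaly detection with a human hand-off. Here is a minimal, hypothetical sketch of that idea (a crude z-score rule over made-up revenue figures, nothing like a production newsroom system):

```python
from statistics import mean, stdev

def flag_for_reporter(series, labels, z_threshold=3.0):
    """Return the labels whose values sit more than z_threshold standard
    deviations from the mean: candidates for a human reporter to dig into."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [label for value, label in zip(series, labels)
            if abs(value - mu) / sigma > z_threshold]

# Toy usage: quarterly revenue figures (in $m) for a watched set of companies.
revenues = [102, 98, 105, 101, 97, 240, 103]
companies = ["A", "B", "C", "D", "E", "F", "G"]
print(flag_for_reporter(revenues, companies, z_threshold=2.0))  # prints: ['F']
```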

 

 


Alibaba looks to arm hotels, cities with its AI technology — from zdnet.com by Eileen Yu
Chinese internet giant is touting the use of artificial intelligence technology to arm drivers with real-time data on road conditions, as well as to power robots in the hospitality sector, where they can deliver meals and laundry to guests.

Excerpt:

Alibaba A.I. Labs’ general manager Chen Lijuan said the new robots aimed to “bridge the gap” between guest needs and their expected response time. Describing the robot as the next evolution towards smart hotels, Chen said it tapped AI technology to address pain points in the hospitality sector, such as improving service efficiencies.

Alibaba is hoping the robot can ease hotels’ dependence on human labour by fulfilling a range of tasks, including delivering meals and taking the laundry to guests.

 

 

Accenture Introduces Ella and Ethan, AI Bots to Improve a Patient’s Health and Care Using the Accenture Intelligent Patient Platform — from marketwatch.com

Excerpt:

Accenture has enhanced the Accenture Intelligent Patient Platform with the addition of Ella and Ethan, two interactive virtual-assistant bots that use artificial intelligence (AI) to constantly learn and make intelligent recommendations for interactions between life sciences companies, patients, health care providers (HCPs) and caregivers. Designed to help improve a patient’s health and overall experience, the bots are part of Accenture’s Salesforce Fullforce Solutions powered by Salesforce Health Cloud and Einstein AI, as well as Amazon’s Alexa.

 

 

German firm’s 7 commandments for ethical AI — from france24.com

Excerpt:

FRANKFURT AM MAIN (AFP) –
German business software giant SAP published Tuesday an ethics code to govern its research into artificial intelligence (AI), aiming to prevent the technology infringing on people’s rights, displacing workers or inheriting biases from its human designers.

 

 

 

 

The future of drug discovery and AI – the role of man and machine — from techemergence.com by  Ayn de Jesus

Excerpt:

Episode Summary: This week on AI in Industry, we speak with Amir Saffari, Senior Vice President of AI at BenevolentAI, a London-based pharmaceutical company that uses machine learning to find new uses for existing drugs and new treatments for diseases.

In speaking with him, we aim to learn two things:

  • How will machine learning play a role in the phases of drug discovery, from generating hypotheses to clinical trials?
  • In the future, what are the roles of man and machine in drug discovery? What processes will machines automate and potentially do better than humans in this field?

 

A few other articles caught my eye as well:

  • This little robot swims through pipes and finds out if they’re leaking — from fastcompany.com by Adele Peters
    Lighthouse, U.S. winner of the James Dyson Award, looks like a badminton birdie and detects the suction of water leaving pipes, which is a lot of water that we could put to better use.
  • Samsung’s New York AI center will focus on robotics — from engadget.com by Saqib Shah
    NYU’s AI Now Institute is close-by and Samsung is keen for academic input.
    Excerpt:
    Samsung now has an artificial intelligence center in New York City — its third in North America and sixth in total — with an eye on robotics, a first for the company. It opened in Chelsea, Manhattan on Friday, within walking distance of NYU (home to its own AI lab), boosting Samsung’s hopes for an academic collaboration.
  • Business schools bridge the artificial intelligence skills gap — from swisscognitive.ch
    Excerpt:
    Business schools such as Kellogg, Insead and MIT Sloan have introduced courses on AI over the past two years, but Smith is the first to offer a full programme where students delve deep into machine learning.

    “Technologists can tell you all about the technology but usually not what kind of business problems it can solve,” Carlsson says. With business leaders, he adds, it is the other way round — they have plenty of ideas about how to improve their company but little way of knowing what the new technology can achieve. “The foundational skills businesses need to hack the potential of AI is the understanding of what problems the tech is actually good at solving,” he says.

 

 

 

San Diego’s Nanome Inc. releases collaborative VR-STEM software for free — from vrscout.com by Becca Loux

Excerpt:

The first collaborative VR molecular modeling application was released August 29 to encourage hands-on chemistry experimentation.

The open-source tool is free for download now on Oculus and Steam.

Nanome Inc., the San Diego-based start-up that built the intuitive application, comprises UCSD professors and researchers, web developers and top-level pharmaceutical executives.

 

“With our tool, anyone can reach out and experience science at the nanoscale as if it is right in front of them. At Nanome, we are bringing the craftsmanship and natural intuition from interacting with these nanoscale structures at room scale to everyone,” McCloskey said.

 


 

 

10 ways VR will change life in the near future — from forbes.com

Excerpts:

  1. Virtual shops
  2. Real estate
  3. Dangerous jobs
  4. Health care industry
  5. Training to create VR content
  6. Education
  7. Emergency response
  8. Distraction simulation
  9. New hire training
  10. Exercise

 

From DSC:
While VR will have its place — especially for times when you need to completely immerse yourself in another environment — I think AR and MR will be much larger and have a greater variety of applications. For example, I could see instructions on how to put something together using AR and/or MR to assist with that process. The system could highlight the next part that I’m looking for and then highlight the corresponding spot where it goes — and, if requested, show me a clip on how it fits into what I’m trying to put together.
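What I’m describing is essentially a guided-assembly state machine: track the current step, highlight the part it needs, and advance when that part is detected in place. Here is a minimal, hypothetical sketch of that logic (all class names and URLs are invented; the hard parts, the computer vision and the AR rendering, are omitted entirely):

```python
from dataclasses import dataclass

@dataclass
class Step:
    part_id: str        # the part the user should pick up next
    target_slot: str    # where it goes on the assembly
    help_clip_url: str  # short video shown on request

class AssemblyGuide:
    """Walks a user through an ordered build, one highlighted part at a time."""
    def __init__(self, steps: list[Step]):
        self.steps = steps
        self.index = 0

    def current_instruction(self) -> str:
        if self.index >= len(self.steps):
            return "Assembly complete."
        s = self.steps[self.index]
        # In an AR/MR headset these would become visual highlights, not text.
        return f"Highlight part '{s.part_id}' and its slot '{s.target_slot}'."

    def show_help(self) -> str:
        if self.index >= len(self.steps):
            return "No step in progress."
        return f"Playing clip: {self.steps[self.index].help_clip_url}"

    def part_placed(self, detected_part: str) -> None:
        # Advance only when the detector reports the expected part in place.
        if self.index < len(self.steps) and detected_part == self.steps[self.index].part_id:
            self.index += 1

# Toy usage
guide = AssemblyGuide([Step("leg-A", "corner-1", "https://example.com/leg-a.mp4"),
                       Step("leg-B", "corner-2", "https://example.com/leg-b.mp4")])
print(guide.current_instruction())
guide.part_placed("leg-A")
print(guide.current_instruction())
```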

 

How MR turns firstline workers into change agents — from virtualrealitypop.com by Charlie Fink
Mixed Reality, a new dimension of work — from Microsoft and Harvard Business Review

Excerpts:

“…workers with mixed-reality solutions that enable remote assistance, spatial planning, environmentally contextual data, and much more,” Bardeen told me. With the HoloLens, Firstline Workers conduct their usual, day-to-day activities with the added benefit of a heads-up, hands-free display that gives them immediate access to valuable, contextual information. Microsoft says speech services like Cortana will be critical for control, along with gesture, according to the unique needs of each situation.

 

Expect new worker roles. What constitutes an “information worker” could change because mixed reality will allow everyone to be involved in the collection and use of information. Many more types of information will become available to any worker in a compelling, easy-to-understand way. 

 

 

Let’s Speak: VR language meetups — from account.altvr.com

 

 

 

 

From DSC:
I vote that we change the color that we use to grade papers — whether on paper (hard copy) or via digital/electronic annotations — from red to green. Why? Because here’s how I see the colors:

  • RED:
    • Failure. 
    • You got it wrong. Bad job.
    • Danger
    • Stop!
    • Can be internalized as, “I’m no good at (writing, math, social studies, science, etc.) and I’ll never be any good at it” (i.e., the fixed mindset: I was born this way and I can’t change things).
  • GREEN:
    • Growth
      • As in spring, flowers appearing, new leaves on the trees, new life
      • As in support of a growth mindset
      • It helps with more positive thoughts/internalized messages: I may have got it wrong, but I can use this as a teaching moment; this feedback helps me grow…it helps me identify my knowledge and/or skills gaps
    • Health
    • Go (not stop); i.e., keep going, keep learning
    • May help develop more of a love of learning (or at least have more positive experiences with learning, vs feeling threatened or personally put down)

 

 

 

Chris Lenihan from DiscoverDataScience.org emailed me to let me know about a recently published guide on their site entitled “A Guide for Women in STEM.” Discover Data Science partnered with Heather Ambler from the University of Pittsburgh and Aiden Ford from the University of Connecticut to help produce this guide. Per Chris, the guide covers:

  • An overview of the challenges women can face in STEM fields
  • Reasons women should pursue a STEM-related career
  • Tips on how to encourage girls at an early age to follow their passion
  • Extensive links to pre-college programs available for women, followed by a listing of over 30 scholarship options available to women pursuing STEM-related degrees

Chris mentioned that both current and aspiring students can benefit from this information as they look for inspiration in their careers. Their mission is to serve students by delivering accurate, high quality information presented in a simple, clean format and they hope that this guide achieves that.

Check it out. >>


Here’s a sample excerpt from that guide:



Pre-College Programs for Women in STEM

CURIE Academy is a one-week summer residential program for high school girls who excel in math and science. The focus is on juniors and seniors who may not have had prior opportunities to explore engineering, but want to learn more about the many opportunities in engineering in an interactive atmosphere.

G.R.A.D.E. CAMP is a week-long day program designed specifically for entering 8th to 12th grade girls who want to find out what engineering is all about through “hands-on” experience. G.R.A.D.E. CAMP emphasizes career exposure rather than career choice, so you can come just to experience something new.

Girlgeneering’s goal of a girls-only camp is to increase the interest of high ability young women in a career in engineering by combating stereotypes, creating connections, reducing the issue of competition for resources with boys, and demonstrating the real-world social impact of engineering. This one-week day camp will introduce middle school young women to the field of engineering by showing how engineering is connected to personal issues, social concerns, and community interests.

It’s a Girl Thing is a residential camp for girls. The goals are to provide girls with strong role models and dispel myths and misconceptions about science and careers in science. Campers experience university life, hands-on classes and recreational activities. In the past we have offered classes ranging from Nano Energy to Animal Science.

Smith Summer Science and Engineering Program (SSEP) is a four-week residential program for exceptional young women with strong interests in science, engineering and medicine. Each July, select high school students from across the country and abroad come to Smith College to do hands-on research with Smith faculty in the life and physical sciences and in engineering.

Survey the World of Engineering is a one-week day camp that will allow you to develop your creativity as well as provide you with the opportunity to meet and speak with working engineers. For part of the camp, you will work on campus with different engineering departments, learning and completing hands-on projects to better understand the breadth and variety of different engineering fields. For the remainder of the camp, you will visit various corporate engineering plants such as General Electric, Procter & Gamble, and Northrop Grumman Xetron to meet professional engineers and see their work in action.

 



Addendums on 10/26

 


 

 

Reuters Top 100: The World’s Most Innovative Universities – 2017 — from reuters.com with thanks to eduwire for their posting on this

Excerpts:

Reuters’ annual ranking of the World’s Most Innovative Universities identifies and ranks the educational institutions doing the most to advance science, invent new technologies and power new markets and industries.

The top 10 innovative universities are:

  1. Stanford University
  2. Massachusetts Institute of Technology (MIT)
  3. Harvard University
  4. University of Pennsylvania
  5. KU Leuven
  6. KAIST
  7. University of Washington
  8. University of Michigan System
  9. University of Texas System
  10. Vanderbilt University

 

 

 

It’s Time for Student Agency to Take Center Stage — from gettingsmart.com by Marie Bjerede and Michael Gielniak

Excerpt:

Jason took ownership of his class project, exhibiting agency. Students who take ownership go beyond mere responsibility and the conscientious completion of assignments. These students are focused on their learning rather than their grade. They are genuinely interested in their work and are as likely as not to get up and work on a project on a Saturday morning, even though they don’t have to (and without any consideration of extra credit).

They complete their homework on time and may well go above and beyond, and they have interesting thoughts to add to classroom dialogue. For many teachers, they are a joy to teach, but they are also the ones who may ask the hard questions, and they may be quick to point out what they see as hypocrisy in authority figures.

“Responsible” students, on the other hand, are compliant. Most teachers think they are a joy to teach. They complete their homework without fail, and pay attention and participate in class. These are the kids typically considered “good” students. They usually win most of the academic awards because they are thought of as the “best and brightest.”

Responsible students are concerned about their grades, and can be identified when they ask questions like:

  • “Will that be on the test?”
  • “How many words do I have to write?”
  • “What does it take to get an A?”

Students who take ownership, on the other hand ask questions like:

  • “There are several different viewpoints on this subject, so why is that, and what does it mean?”
  • “Is what you are teaching, or what is in my textbook, consistent with my research?”
  • “Why is this important?”

Compliance or agency? We need to decide.

 

 

The past decades have been the age of the responsible, compliant student: the kind of student who could get into college and then immediately secure a good job. But the world and the workforce have changed.

 

 

 

 

 
