AI Society for 12.2.24 – Oh Canada!


Today:  the Canada v OpenAI copyright suit, more use cases and prompt engineering, the advantage, maybe, of copilots, a tutorial on Gen AI music creation, AI insurance fraud, and the AI ‘Manhattan Project’

 

On the copyright front, more on lawsuits, this time from a group of Canada’s leading media outlets, paralleling what we’ve seen thus far in the US.  From the ‘NY Times’ reporting:

 

·      Five of the country’s major news companies, including the publishers of its top newspapers, newswires and the national broadcaster, filed the joint suit in the Ontario Superior Court of Justice on Friday morning.

·      While this is the first such lawsuit in Canada, it is similar to a suit brought against OpenAI and Microsoft in the United States in 2023 by The New York Times, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit’s claims.

·      It accuses OpenAI of ignoring the Canadian news outlets’ use of specific technological and legal tools — such as the Robot Exclusion Protocol, copyright disclaimers and paywalls — in place to prevent scraping or other types of unauthorized copying of their published content.
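For reference, the Robot Exclusion Protocol mentioned above is the familiar robots.txt convention: a plain-text file at a site’s root that tells crawlers what they may fetch.  A minimal example (GPTBot is OpenAI’s published crawler name; the rest is illustrative) might look like:

    # robots.txt at the site root — asks OpenAI's crawler to stay out entirely
    User-agent: GPTBot
    Disallow: /

    # other well-behaved crawlers may still index the site
    User-agent: *
    Allow: /

Compliance with robots.txt is voluntary, which is precisely why the publishers also point to copyright disclaimers and paywalls.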

 

 

Two items to file away.  The first is a good primer by Henrique Centieiro and Bee Lee in ‘Limitless Investor’ on GPT use cases that are useful on a day-to-day basis.  Examples include cooking, spreadsheet analysis, handwriting analysis, and technical analysis.  And the second, by Pranav Mehta in ‘Generative AI’, details how to supercharge your prompts, such as forcing use of o1-preview for free or tuning the ‘temperature’ setting.
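To make the ‘temperature’ point concrete, here is a minimal sketch using the OpenAI Python SDK (the model name and prompt are placeholders of my own, not from Mehta’s article):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Lower temperature -> more deterministic, focused output;
    # higher temperature -> more varied, creative output.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0.2,
        messages=[{"role": "user", "content": "Explain RSI in two sentences."}],
    )
    print(response.choices[0].message.content)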

 

Another update on software development and AI copilots.  Enrique Dans describes a recent Microsoft study that concluded:

 

·      In blind reviews, code written with GitHub Copilot was shown to have significantly fewer readability errors, allowing developers to write an average of 13.6% more lines of code without encountering such issues.

·      Readability improved by 3.62%, reliability by 2.94%, maintainability by 2.47%, and level of conciseness by 4.16%, all statistically significant percentages. Developers were 5% more likely to approve code produced using GitHub Copilot, resulting in it being ready to merge sooner, speeding up the time to fix bugs or to implement new functionality.

·      In addition, the tool apparently helps developers write code up to 55% faster, making 88% of developers feel more focused and 85% of them more confident in the code.

 

But take this with a grain of salt: as a software developer, you still must understand what the copilot generates.  He adds:

 

In contrast, other studies are critical of the tool, concluding that developers with access to Copilot had a significantly higher error rate, possibly derived from the level of experience not only in software development, but in the use of the tool itself. Another study showed “downward pressure on code quality” as a result of using the tool.
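As a contrived illustration of why generated code still needs a careful read (hypothetical, not drawn from either study), consider a plausible-looking suggestion with a silent off-by-one bug:

    # A copilot-style suggestion that compiles, runs, and looks right...
    def moving_average(values, window):
        # Bug: the range stops one short, silently dropping the last window.
        # The correct bound is len(values) - window + 1.
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window)]

    print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5] — the final 3.5 is missing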

 

On to the creative side, where Christopher Landschoot in ‘Whitebalance’ offers a tutorial on the current state of Gen AI music, with the observation:

 

With any transformative new technology comes some growing pains, however. These new capabilities have raised alarm bells in the minds of many musicians, producers, rights holders, and creators, that generative music models could be an existential threat to their art and livelihoods. This has generated (pun intended) a rift between AI supporters and detractors, leaving many musicians and rights holders reticent to allow any AI model, generative or not, to use their data during training. This is an unfortunate binarization of an issue that actually has many shades of gray, as AI use cases can range from predatory to neutral to significantly benefiting creators and rights holders.

 

His post contrasts Gen AI and non-Gen AI audio tools and their uses, explains the technology behind Gen AI audio models, covers the copyright issues, and then offers a framework for the future.  His simple description of training is excellent:

 

Think of it like a child that learns how to build with Legos. If the child is shown the instructions for many Lego starship sets, she will be able to build new creations that are similar to the starship sets that she learned from. However, if she was never shown instructions for the Eiffel Tower set, she wouldn’t know how to build anything similar to the Eiffel Tower. So while she is able to build “new” creations, they will all be similar to starships. In the same vein, a model that is trained only on music would not know how to generate the sound of a dog barking.

 

Source: Christopher Landschoot
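A toy way to see the same idea in code (a hypothetical character-level Markov model of my own, nothing to do with Landschoot’s actual systems): a model fit on one kind of data can only ever emit symbols it saw during ‘training’.

    import random
    from collections import defaultdict

    def fit(corpus):
        # Toy 'training': record which character follows which.
        table = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            table[a].append(b)
        return table

    def generate(table, start, n=24):
        out = [start]
        for _ in range(n):
            out.append(random.choice(table.get(out[-1], [start])))
        return "".join(out)

    # Fit only on 'starship' text; it can never produce a letter
    # (say, 'E' for Eiffel) that is absent from its training data.
    model = fit("starship starship starship")
    print(generate(model, "s"))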

 

Turning to more nefarious uses of AI, Amir-Erfan Izadi, posting in ‘The Deep Hub’, details insurance fraud, how AI is pushing its boundaries, and how companies like SAP are creating tools, also leveraging AI, to counter it.  He covers auto, life, homeowner, and worker insurance, across the different phases: claims, application, agents, and underwriting.  Then he describes how companies counter fraud, including network analysis, computer vision, and now, gen AI.
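To sketch the network-analysis idea (hypothetical data of my own, not SAP’s actual tooling): claims that share identifiers such as phone numbers or addresses form connected clusters worth a closer look.

    import networkx as nx

    # Hypothetical claims linked to shared contact details —
    # a classic fraud-ring signal.
    edges = [
        ("claim_1", "phone_555"), ("claim_2", "phone_555"),
        ("claim_2", "addr_12_oak"), ("claim_3", "addr_12_oak"),
        ("claim_4", "phone_999"),
    ]

    G = nx.Graph()
    G.add_edges_from(edges)

    # Components tying several claims to a few identifiers merit review.
    for component in nx.connected_components(G):
        linked = sorted(n for n in component if n.startswith("claim"))
        if len(linked) > 1:
            print("possible ring:", linked)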

 

An interesting stat:

 

In the US, the FBI estimates that the average family pays $400 to $700 extra each year in additional premiums because of insurance fraud. Source

 

Source: Amir-Erfan Izadi

 

 

 

Lastly, a perspective by Ignacio de Gregorio, now a bit outdated given the US election results, on the call for a ‘Manhattan Project’ approach to AI.  I don’t agree with some of his conclusions, but it’s good reading nonetheless.  We can see three ‘camps’ developing – the US, the EU, and China.  Good background reading is the referenced ‘Google DeepMind’ paper.

 

Source: Google DeepMind

 

 

 

 

 
