Virtual Ray

Creating Virtual Ray, a Custom GPT

One of the neat features of ChatGPT is the ability to create Custom GPTs. A Custom GPT allows you to build a bespoke tool on top of ChatGPT, creating a unique product. In this post I am going to tell you how I created a simple ‘Virtual Ray’ via a Custom GPT.

Manifesto for Synthetic Data

Draft Synthetic Data Manifesto

In this note I set out what I believe Synthetic Data to be, why we need to define synthetic data, and some guidelines that I think vendors and buyers should adopt. I have been involved in discussions with a wide range of organisations, but these views are my own; they do not represent the views of anybody else.

Synthetic Data Panel at ESOMAR Congress

AI and Synthetic Data Dominate ESOMAR Congress in Athens

Last week saw over 1000 insight and market researchers from 78 countries gather in Athens for ESOMAR’s Annual Congress. With three stages and a host of side events, the range of topics discussed was immense, but one theme stood out above all the rest, and that was AI. One aspect of AI in particular was widely discussed, reviewed, and commented on: Synthetic Data.

Python and AI

Python, one of the reasons ChatGPT is pretty good at maths

One key reason ChatGPT is good at handling mathematical tasks and data analysis is its integration with Python, especially in its premium versions. Python not only powers the solutions provided but also allows users to see how those solutions are generated. In this post I show how you can take advantage of this.
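To give a flavour of what this looks like, here is a hypothetical illustration of the sort of Python that ChatGPT’s data analysis feature writes and runs behind the scenes; the file name and column name are made-up examples, not output from ChatGPT itself.

```python
# Hypothetical example of the kind of Python ChatGPT generates for a
# data analysis request: load a CSV, then summarise a numeric column.
import pandas as pd

# 'survey_results.csv' and the 'satisfaction' column are placeholder names
df = pd.read_csv("survey_results.csv")

summary = df["satisfaction"].describe()  # count, mean, std, min, quartiles, max
print(summary)
```

Because ChatGPT can show you this generated code, you can check the steps it took, rerun them yourself, and adapt them to your own data.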

AI Automating Tasks

Using the ChatGPT API to automate tasks

The web interface for ChatGPT is great, but it does not always deliver what we want. In this post I want to describe how creating a simple call to the ChatGPT API can give a range of benefits. The example I am going to share in this post comes from a course I am running on how to use the ChatGPT API.
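As a rough sketch of what a simple call looks like, the snippet below uses the openai Python package and assumes an OPENAI_API_KEY environment variable is set; the model name, system message, and prompt are placeholder assumptions, not the exact example from the course.

```python
# Minimal sketch of a single call to the ChatGPT API using the openai
# Python package (pip install openai). Assumes OPENAI_API_KEY is set
# in the environment; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a concise market research assistant."},
        {"role": "user", "content": "Summarise this open-ended answer in one sentence: ..."},
    ],
)

print(response.choices[0].message.content)
```

Once a call like this works, it can be wrapped in a loop to process hundreds of items, which is where the automation benefits come from.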

Person speaking nicely to a computer

Speaking Nicely to Generative AI Seems to Help Get Better Answers

In a paper published in April this year, researchers showed how speaking more collaboratively to LLMs seems to generate better responses. The paper describes how Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen conducted a series of experiments exploring how different prompting strategies produced different answers.