The role of Synthetic Respondents in 'Human-centred' Research
In the growing buzz around generative AI, a new concept has arisen in research methodologies: "synthetic respondents". Instead of asking people your questions, you have a Large Language Model create 'synthetic respondents', which you can ask as many questions as you like. And they will give you answers. And they will probably sound like real people. They will never get bored. They will never try to disguise their "true" thoughts and feelings (as David Ogilvy once said, "People don't think what they feel, don't say what they think, and don't do what they say."). You can get answers from thousands of them, very quickly and at very little cost.
(Also - they never leave behind a bad smell, and won't eat all of your biscuits.)
But again - so obvious as to be barely worth mentioning - they aren't real people. They are synthetic - "made up" - just like the 'actors' pretending to be the sort of people we actually want to talk to.
They will do it faster. They will do it cheaper. Will they do it better - or at least, 'good enough'? Well... that's the real question.