2023–Present

Deepwork

In response to the current demand for high-performing thought leadership in every field, Jessica Peter and The Pudding created a satirical website offering an AI-informed service that generates the best professional profile you can imagine.

What if there were a way to fight back against LinkedIn culture, recruiting bots, and the pressure to be a so-called thought leader? That’s the vision of deepwork, a satirical yet functional website that offers AI tools that can generate a glowing CV, a desirable headshot and witty posts for your social media accounts. deepwork leads with the sentiment that “You’re Only Human” and therefore in need of tools to improve people’s perceptions of you in the workplace. It allows users to manipulate their digital identities in accordance with the latest expectations in their fields, and it provides a detailed list of all the tools used to create each element. The project offers a fresh perspective on the intersection of technology and creativity, where AI can be used both to enhance and to critique our participation in platforms such as LinkedIn and the pressure to be a perfect, high-performing thought leader. It also provides a vision of how AI could shape the way we present our professional identities in the future.

Peter, deepwork’s author, notes on her website that, while deepwork isn’t a real company (yet), the technology used to generate its demos all comes from real, easy-to-access code you can find online, and the grief comes from real trends observed in at least some corporate settings.

Some further details: The Resume Atelier was originally intended to be a GPT-2 model, but even though your LinkedIn profile information is available online and it’s perfectly legal to scrape that data (for now), she didn’t feel right about using it. Max Woolf, Thomas Davis and the folks behind JSON Resume had already created an open-source fake resume generator based on a recurrent neural network, with models trained on some 6,000 real job resumes sourced from GitHub, which perhaps explains their technical leanings. All the resumes generated for deepwork were produced with the models provided by those authors. The different experience levels are reflected in the amount of data generated for certain fields (e.g., “Student” resumes generally have fewer examples of work experience than “Intermediate” or “Senior” resumes).

The features in Profile Picture Studio rest on the same principle of building on openly available, pre-trained models. In lay terms, you take a model trained on many photos of people and computationally create a new artificial face that closely resembles your photo of choice, then tweak the newly generated face. In less lay terms, this is called projecting the photo into the latent space of the model. The demos in the article were made using GitHub user Woctezuma’s code for projecting and altering photos, which relies on NVIDIA’s StyleGAN2 model, trained on 70,000 photos of people sourced from Flickr.

Peter wishes to acknowledge that the “Basic” to “Yassified” filter generally gives subjects lighter skin and more predominantly European features the higher it is set. “I do not equate white skin with physical beauty, and neither does The Pudding,” she writes. “This is a consequence of the underlying technology used. In writing this explanation, I don’t mean to excuse the implications of this filter’s use nor others like it, but it is beyond my capabilities to fix it and I wanted to include some representation of digital ‘beautification.’”
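To make the projection step concrete, here is a minimal, illustrative sketch of the idea, not Woctezuma’s or NVIDIA’s actual code: a latent vector is optimized until the generator’s output matches the target photo, and the recovered vector is then nudged to “tweak” the face. The loader helpers and the attribute direction below are hypothetical placeholders.

```python
# Illustrative sketch of latent-space projection, not the deepwork code.
# `load_stylegan2_generator`, `load_image_as_tensor` and `attribute_direction`
# are hypothetical placeholders; real projector scripts are more elaborate.
import torch
import lpips  # perceptual similarity metric commonly used for projection

G = load_stylegan2_generator("ffhq")           # hypothetical: frozen, pre-trained generator
target = load_image_as_tensor("headshot.png")  # hypothetical: photo scaled to G's resolution

# Start from the generator's average latent and optimize it so the
# generated face matches the target photo as closely as possible.
w = G.mean_latent.clone().requires_grad_(True)
perceptual = lpips.LPIPS(net="vgg")
optimizer = torch.optim.Adam([w], lr=0.01)

for step in range(1000):
    optimizer.zero_grad()
    image = G.synthesize(w)                    # hypothetical synthesis call
    loss = perceptual(image, target).mean() + ((image - target) ** 2).mean()
    loss.backward()
    optimizer.step()

# "Tweaking" the face then means nudging the recovered latent along a
# direction associated with some attribute and re-rendering the image.
edited = G.synthesize(w + 1.5 * attribute_direction)
```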
The Thought Leadership package uses OpenAI’s GPT-2 text generation model, which was trained on a dataset of 8 million web pages gathered from links posted on Reddit. To generate text specific to her selected influencers, Peter downloaded each thought leader’s 3,200 most recent tweets (the limit imposed by the Twitter API), excluding replies and retweets, and then fine-tuned the GPT-2 model on the tweets from each specified combination of thought leaders. Because the number of tweets available for each user is unevenly distributed, some of the generated outputs resemble one author more than the other. The generated tweets were curated to omit incoherent gibberish, mentions by name of real people who are not celebrities, and anything too dark or political.
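As a rough illustration of that pipeline, the sketch below collects a user’s recent timeline with tweepy and fine-tunes the small GPT-2 checkpoint with Max Woolf’s gpt-2-simple. The article doesn’t say which libraries Peter used, so both choices, along with the placeholder handles, credentials and filenames, are assumptions.

```python
# Illustrative sketch only; not the actual deepwork pipeline.
# Assumes the (now legacy) Twitter v1.1 API via tweepy and gpt-2-simple.
import tweepy
import gpt_2_simple as gpt2

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")  # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def recent_tweets(handle):
    """Pull a user's most recent tweets (the API stops at roughly 3,200),
    skipping retweets and replies and keeping the full text."""
    cursor = tweepy.Cursor(
        api.user_timeline,
        screen_name=handle,
        count=200,
        include_rts=False,
        exclude_replies=True,
        tweet_mode="extended",
    )
    return [status.full_text for status in cursor.items()]

# Combine two thought leaders' tweets into a single training file.
pair = ["thought_leader_a", "thought_leader_b"]  # placeholder handles
with open("pair_tweets.txt", "w") as f:
    for handle in pair:
        f.write("\n".join(recent_tweets(handle)) + "\n")

# Fine-tune the small (124M) GPT-2 checkpoint on the combined tweets,
# then sample a few candidate posts for manual curation.
gpt2.download_gpt2(model_name="124M")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset="pair_tweets.txt", model_name="124M",
              steps=500, run_name="pair_run")
samples = gpt2.generate(sess, run_name="pair_run", length=60,
                        temperature=0.8, nsamples=10, return_as_list=True)
```

Because each handle contributes a different number of tweets to the training file, the fine-tuned model will echo whichever author supplied more text, which matches the uneven blending described above.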