Create a Copywriting Assistant with Locofy.ai and ChatGPT

Build a Next.js copywriting app from a Figma file using Locofy.ai, ChatGPT, and OpenAI API.

Goal: Explore how AI-powered tools can generate frontend code and also extend it.

This guide will take you step-by-step through converting your Figma designs into Next.js code using Locofy.ai, then extending that code and adding AI features with ChatGPT and the OpenAI API.

Prerequisites:

  1. You will need your design ready in Figma.
  2. You will need a Locofy.ai account and have the plugin up and running on your Figma file.
  3. Your Locofy project must be configured to use Next.js.
  4. You will need an OpenAI account.

Turn Your Designs Into Next.js Code With Locofy.ai

This is the design file that we’ll be using in this guide. It is a single-page design with two text areas to collect inputs and output results.

Image of the demo app
  1. The first step is to tag our designs. Tagging allows us to turn design elements into working components such as buttons, dropdowns, and inputs.

    You can also work with popular UI libraries such as Material and Bootstrap by tagging your elements.

    For this app, you’ll notice items such as:

    1. Buttons
    2. Inputs
  2. Now you can make your designs responsive by configuring the styles in the Styles & Layouts tab.

    Select the components that need to be responsive; in the Locofy plugin, under the Styles & Layouts tab, you will find different media queries & states. Select the breakpoint you want to target and adjust the styles and layout accordingly.

    You can also use auto layout in your designs and let the Locofy plugin take care of layouts on different screen sizes.

  3. You can trigger actions to build fully featured apps and websites. Head over to the ‘Actions’ tab and assign actions to click events.

    Available actions include scrolling an element into view and opening popups and URLs, among others.

    Image of the action properties
  4. Now that our designs are tagged correctly and responsive, we can preview them.

    To preview your design, select the frame or layer that you want to view, and click the “Preview” button.

    Image of the preview button.

    Once you are satisfied with your preview, you can sync your designs to Locofy Builder to add more advanced settings, share your live prototype, and export code.

  5. Click the “View Code” button at the bottom right corner of the plugin & select the frames. Once the sync is complete, click on “View code in the Builder”.

  6. You can turn code generated by Locofy into components inside the Locofy Builder using Auto Components. This allows you to easily extend and reuse the code.

    Image of the component with props in Locofy Builder

    In our app, we can turn our header section into a component. You can quickly add props as well using the Locofy Builder.

  7. Once the prototype matches the product vision, you can view the live responsive prototype that looks and feels like your end product. Then you can export the code.

    Image of the export screen.

    Make sure to select all the screens you need, as well as the correct framework and project settings. Once done, you can download your code as a zip file by clicking the “Confirm Export” button in the bottom-right corner.

Exported Code Overview

After extracting the zip file you downloaded, you can see how the component and page files are structured.

Image of the project structure.

We have the UI components in the “components” folder, our routes in the “pages” folder, and our media assets in the “public” folder.
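
For example, a header section turned into a component with Auto Components might come out roughly like the sketch below; the file name, prop names, and class names here are hypothetical and will differ in your export.

```jsx
// components/Header.js — hypothetical sketch of an exported Auto Component
// (the real file, props, and class names depend on your design and export settings)
const Header = ({ title, subtitle }) => (
  <header className="header">
    <h1 className="header-title">{title}</h1>
    <p className="header-subtitle">{subtitle}</p>
  </header>
);

export default Header;
```

Any page in the “pages” folder can then import this component and reuse it with different prop values.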

Extending Code with ChatGPT

  1. Log in to your ChatGPT account by clicking the ‘Login’ button in the center of the page. If you don’t have an account, click ‘Sign up’.

  2. Now you will be redirected to the application where you can interact with ChatGPT.

  3. We’ll be using ChatGPT to write React code for us, starting by making the two checkboxes stateful. We also want to make sure that only one checkbox is selected at a time.

    For this, we will ask ChatGPT to generate two React checkboxes with state, where selecting one deselects the other.

  4. The above query will generate the following response:

    Image of ChatGPT output

    As you can see, it has generated exactly the code we need, including our custom selection logic.

    Since the exported code already contains these elements, we can copy the properties from the elements that ChatGPT has created and paste them into our “index.js” file inside the “/pages” directory.

  5. We can also make our textarea & button elements stateful by running a similar query through ChatGPT.

    ChatGPT will respond with the corresponding stateful textarea and button code.

    Again, we can extract the logic we need and paste it into our “index.js” file inside the “/pages” directory.

Your “index.js” file should look like this after making the changes:
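
(The exact markup and class names come from your Locofy export, so the snippet below is only a rough sketch: the checkbox labels, the “selected” state name, and the placeholder handleClick body are illustrative.)

```jsx
// pages/index.js — rough sketch; the surrounding markup and class names
// come from your Locofy export and will differ in your project.
import { useState } from "react";

export default function Home() {
  // A single "selected" value guarantees only one checkbox is checked at a time
  const [selected, setSelected] = useState("social"); // "social" or "blog"

  // text1 holds the user's input, text2 holds the generated output
  const [text1, setText1] = useState("");
  const [text2, setText2] = useState("");

  // Placeholder for now — replaced with the OpenAI API call in the next section
  const handleClick = () => {
    setText2(text1);
  };

  return (
    <div className="container">
      <label>
        <input
          type="checkbox"
          checked={selected === "social"}
          onChange={() => setSelected("social")}
        />
        Social media post
      </label>

      <label>
        <input
          type="checkbox"
          checked={selected === "blog"}
          onChange={() => setSelected("blog")}
        />
        Blog
      </label>

      <textarea value={text1} onChange={(e) => setText1(e.target.value)} />

      <button onClick={handleClick}>Generate</button>

      <textarea value={text2} readOnly />
    </div>
  );
}
```

Keeping a single “selected” value in state is what guarantees that only one checkbox is checked at a time.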

Generating Content with OpenAI API

  1. To use the OpenAI API, you will need to create an account by visiting the OpenAI API homepage.

  2. After creating the account, you will see a page similar to the one shown below. Click your profile picture at the top right corner and then click “View API Keys”.

    Image of the OpenAI page.
  3. Now click on “Create new secret key” to create a new API key. Make sure to copy the key as you won’t be able to view it again. 

  4. Now create a “.env” file in the root folder of your Next.js project, add a variable named “NEXT_PUBLIC_API_KEY”, and assign your API key to it.

  5. We will use ChatGPT again to generate a code snippet that integrates the OpenAI API into our app to create social media posts. We will use the “text-davinci-003” model for this, as it is the most capable model.

    We also want the API call to run every time the “Generate” button is clicked.

    For this, we will ask ChatGPT to write a function that sends the first textarea’s input to the OpenAI API whenever the button is clicked.

    ChatGPT will respond with a working, fetch-based code snippet.

  6. The generated snippet contains a function called “handleClick” that uses fetch to call the OpenAI API; it runs every time the button is clicked.

    We can replace our “handleClick” function, which we created in the earlier steps, with this one. However, we need to make a few minor changes:

    1. The “Authorization” value should be `Bearer ${process.env.NEXT_PUBLIC_API_KEY}`
    2. Our “prompt” value in the request body would become `Generate a social media post from: ${text1}`
    3. Finally, we would update our “text2” value with “setText2()” instead of “setPost()”
  7. Finally, since we also want to generate blogs, we can adjust our “prompt” based on which checkbox the user has selected.

After making all the changes, your code should look like this:
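
(As before, the JSX is a sketch of the Locofy-exported markup rather than the exact file. The request follows the standard OpenAI completions API; the blog-prompt wording and the max_tokens value are illustrative choices, and the key is read from a .env line such as NEXT_PUBLIC_API_KEY=<your key>.)

```jsx
// pages/index.js — final sketch; markup and class names still come from your export.
import { useState } from "react";

export default function Home() {
  const [selected, setSelected] = useState("social"); // "social" or "blog"
  const [text1, setText1] = useState("");
  const [text2, setText2] = useState("");

  const handleClick = async () => {
    // Build the prompt from the selected checkbox (social media post or blog)
    const prompt =
      selected === "social"
        ? `Generate a social media post from: ${text1}`
        : `Generate a blog post from: ${text1}`;

    // Call the OpenAI completions API; NEXT_PUBLIC_API_KEY is set in the .env file
    const response = await fetch("https://api.openai.com/v1/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.NEXT_PUBLIC_API_KEY}`,
      },
      body: JSON.stringify({
        model: "text-davinci-003",
        prompt,
        max_tokens: 256,
      }),
    });

    const data = await response.json();
    // Show the generated text in the second textarea
    setText2(data.choices[0].text.trim());
  };

  return (
    // Same markup as before: two checkboxes, the input textarea,
    // the "Generate" button, and the output textarea
    <div className="container">
      <label>
        <input
          type="checkbox"
          checked={selected === "social"}
          onChange={() => setSelected("social")}
        />
        Social media post
      </label>

      <label>
        <input
          type="checkbox"
          checked={selected === "blog"}
          onChange={() => setSelected("blog")}
        />
        Blog
      </label>

      <textarea value={text1} onChange={(e) => setText1(e.target.value)} />

      <button onClick={handleClick}>Generate</button>

      <textarea value={text2} readOnly />
    </div>
  );
}
```

With this in place, clicking “Generate” sends the contents of the first textarea to OpenAI and writes the returned completion into the second one.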

That’s it! This is how you can use ChatGPT to extend Locofy-generated code and easily integrate AI functionality into it via the OpenAI API.

With the Locofy.ai plugin, you can generate pixel-perfect frontend code, broken down into pages and components that accept props, while ChatGPT helps you extend it in record time.

By following the steps above, you can turn your designs into production-ready Next.js code with the ability to generate social media posts and blog pages using AI.

Keep your loco mode on!