Using ChatGPT Capabilities in 1C Platform Applications

Alexander Biryukov

29.03.2024 12 min


The title of this article suggests that the discussion ahead will touch on the trendy topic of using neural networks in various areas of human activity.

But before we continue, let's consider how a neural network, such as ChatGPT, can help us in our work.

The most obvious answer is to help with writing code. For example, here is the code that the neural network created to calculate the factorial:
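The original illustration with the generated code is not reproduced here; based on the discussion that follows, it looked roughly like this (a reconstruction, not the exact model output):

```bsl
// Reconstruction of the AI-generated code.
// Note the error discussed below: the routine is declared
// with the keyword Procedure, yet it returns a value.
Procedure CalculateFactorial(Number)
    Result = 1;
    For Counter = 2 To Number Do
        Result = Result * Counter;
    EndDo;
    Return Result; // Invalid: Return with a value is not allowed in a Procedure
EndProcedure
```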

At first glance, everything is fine, and the code seems to be working. But an attentive developer will see an error: the routine CalculateFactorial is declared with the keyword Procedure, while it should be declared with the keyword Function, since it returns a value.

If we ask the AI to fix the error, the result will look like this:
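Again, a reconstruction of the corrected code:

```bsl
// Corrected version: declared as Function, so returning a value is valid.
Function CalculateFactorial(Number)
    Result = 1;
    For Counter = 2 To Number Do
        Result = Result * Counter;
    EndDo;
    Return Result;
EndFunction
```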


This code is fully functional and can be safely used in an application.
In fact, developers working in other languages have long used such capabilities of neural networks, and I see no reason why 1C developers should lag behind. The only thing to keep in mind is that any code generated by a neural network needs additional verification. In other words, the developer's job is not going anywhere!

How else can we use the neural network? Is it possible to connect to ChatGPT from the 1C code?
Indeed, most of you already know the answer to this question; those who are still in the dark will find it below. Of course, ChatGPT has an API, and, of course, we can connect to this API and use all the features this neural network provides.

What can we automate using ChatGPT? Well, there are many things. It all depends on your imagination, but for starters, let's consider a fairly simple and most obvious example. We will write an application to create descriptions for products in our catalog.

Here is the task. We have a catalog of products, and for these products we need to create compelling descriptions so that, after reading them, customers would immediately want to make a purchase.
Previously, companies would hire a dedicated person for such a job, but now we will try to do it ourselves.

Let's get started!

The first step to getting access to the API is to register at https://openai.com/. Then, you need to sign in, and you will be taken to their portal (https://platform.openai.com/).

On this portal, follow the API keys link.


And create a new secret key:


Please note that the key is shown only once, right after creation, and after that it gets hidden. Therefore, it is advisable to save it immediately in a safe place.

So, we have a secret key. Now, we need to understand how we will interact with the neural network. It's simple! We open the API description and see that if we want to communicate with the neural network via text, we need to work with chats: https://platform.openai.com/docs/api-reference/chat/object


The entry point in this case will be the following:

POST https://api.openai.com/v1/chat/completions
The link (https://platform.openai.com/docs/api-reference/chat/create) provides a fairly detailed description of all the necessary parameters and examples, so I see no need to focus on this.

Well, now it's time to run the 1C:Enterprise platform and start coding. Let's create a new empty 1C configuration. In it, we add three constants: ChatGPT_API_PATH (the API host), ChatGPT_completions (the resource path), and ChatGPT_token (the access token).

Next, we create a new catalog Products with one additional attribute fullDescription. For this catalog, we need to add an item form where we add one command GetFullDescription:

 


We launch the platform in dialog mode and fill the Products catalog with several products.

One thing to note here. You must understand that the neural network needs to "know" something about your products to create an adequate description; otherwise, the result is not guaranteed. For example, if we ask to create a description for the product Goodyear Eagle F1 SuperSport R, it will be more relevant than the description of some unknown product.

Knowing this feature of neural networks, we fill our Products catalog accordingly. In this example, I used products from the herbliz.com website.


With the catalog filled in, it's time to return to 1C Designer and finally write the code that will interact with ChatGPT.

First, we create a handler for the GetFullDescription command:

&AtClient
Procedure cmdGetFullDescription(Command)
    cmdGetFullDescriptionAtServer();
EndProcedure

&AtServer
Procedure cmdGetFullDescriptionAtServer()

    structureReturn = getFullDescriptionFromAPI();

    If structureReturn.Error Then
        Message("An error occurred: " + structureReturn.Result);
        Return;
    EndIf;

    Object.fullDescription = structureReturn.Result.choices[0].message.content;

EndProcedure

As you can see from the code, all the work is done in the getFullDescriptionFromAPI function, and if there is no error, we assign a new value to the fullDescription attribute.
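For context, a successful response from the chat completions endpoint is a JSON object of roughly the following shape (abridged; the field values here are illustrative), which is why the generated text is read from choices[0].message.content:

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "...generated product description..."
      },
      "finish_reason": "stop"
    }
  ]
}
```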

So, here is the code of the getFullDescriptionFromAPI function:

&AtServer
Function getFullDescriptionFromAPI()

    ReturnParameters = New Structure;
    ReturnParameters.Insert("Result");
    ReturnParameters.Insert("Error", False);

    structureBody = returnStructureBody(Object.Description);

    // string_API_PATH = "api.openai.com";
    string_API_PATH = TrimAll(Constants.ChatGPT_API_PATH.Get());

    // stringResourceAddress = "/v1/chat/completions";
    stringResourceAddress = TrimAll(Constants.ChatGPT_completions.Get());

    stringToken = TrimAll(Constants.ChatGPT_token.Get());

    HTTPConnection = New HTTPConnection(string_API_PATH, 443,,,,, New OpenSSLSecureConnection);

    HTTPRequest = New HTTPRequest();
    HTTPRequest.ResourceAddress = stringResourceAddress;
    HTTPRequest.Headers.Insert("Content-Type", "application/json");
    HTTPRequest.Headers.Insert("Authorization", "Bearer " + stringToken);

    JSONWriter = New JSONWriter;
    JSONWriter.SetString();
    WriteJSON(JSONWriter, structureBody);
    resultJSON = JSONWriter.Close();

    HTTPRequest.SetBodyFromString(resultJSON);
    HTTPResponse = HTTPConnection.Post(HTTPRequest);

    If HTTPResponse.StatusCode = 200 Then
        JSONReader = New JSONReader;
        JSONReader.SetString(HTTPResponse.GetBodyAsString());
        ReturnParameters.Result = ReadJSON(JSONReader, False);
    Else
        ReturnParameters.Result = HTTPResponse.GetBodyAsString();
        ReturnParameters.Error = True;
    EndIf;

    Return ReturnParameters;

EndFunction

The structure of the request body gets assembled inside the returnStructureBody function. Let's see what it looks like.

&AtServer
Function returnStructureBody(stringOurQuestion)

    structureBody = New Structure;

    structureBody.Insert("model", "gpt-3.5-turbo");

    structureMessages = New Structure;
    structureMessages.Insert("role", "user");
    structureMessages.Insert("content", "Create an advertising description of this product " + TrimAll(stringOurQuestion));

    arrayMessages = New Array;
    arrayMessages.Add(structureMessages);

    structureBody.Insert("messages", arrayMessages);

    Return structureBody;

EndFunction
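For reference, after serialization with WriteJSON, the request body produced by this function looks roughly like this (the tail of the content string is the actual product description):

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Create an advertising description of this product ..."
    }
  ]
}
```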

Basically, that's all we need for it to work! This code calls the neural network API, sends it a request to create the desired description, then receives the response and processes it accordingly.

Let's try it in action. We launch 1C in dialog mode, open the Products catalog, open the form of any product and click the Get full description button. If we did everything correctly, we get the result after a while.


I don't know about you, but I am both fascinated and scared by the result. Things that used to be done by a person can now be handled by a computer in a matter of seconds. Does this mean that humans will soon not be needed at all?

But let’s get back on track.

Let's complicate our task a little. Let’s take a look at the API description. We see that there is a guide on how to work with images (https://platform.openai.com/docs/api-reference/images/object):

Let's make it possible to generate an advertising image of a product based on its description, similar to the previous example.

We add another constant to store the endpoint.


 

Then, we add a new attribute image of type ValueStorage to the Products catalog. This attribute will store the product image:


In the item’s form in the catalog Products, we also create a new attribute refToImage of type String:


Then, we place this attribute in the item’s form in the catalog


And change the item type inside the form to the Image field.


In the next step, we create the Get image by description command and add it to the form as well. 


At this point, we are through with all the preparation steps and can proceed with coding.

The handler for the Get image by description command looks like this:

&AtClient
Procedure cmdGetImageByDescription(Command)

    structureResult = cmdGetImageByDescriptionAtServer();

    If structureResult.Error Then
        Return;
    EndIf;

    refToImage = PutToTempStorage(structureResult.binaryDataPicture, UUID);

    Modified = True;

EndProcedure

First, we get the image (function cmdGetImageByDescriptionAtServer) and then display it on the item’s form using temporary storage (PutToTempStorage). At the very end, we set the modification flag (Modified = True) so that the user does not forget to save the catalog item when closing the form.

So, let's look at the function cmdGetImageByDescriptionAtServer:

&AtServer
Function cmdGetImageByDescriptionAtServer()

    structureReturn = getImageByDescriptionFromAPI();

    If structureReturn.Error Then
        Message("An error occurred: " + structureReturn.Result);
        Return structureReturn;
    EndIf;

    binaryDataPicture = downloadAndApplyPicture(structureReturn);

    structureReturn.Insert("binaryDataPicture", binaryDataPicture);

    Return structureReturn;

EndFunction

First, the platform calls the function getImageByDescriptionFromAPI, which forms a request to the neural network API and receives a link to the generated image file. Then, if there is no error, the function downloadAndApplyPicture is called, which downloads the file from the server. Finally, the downloaded image is returned to the calling function as binary data.

The source code of the function getImageByDescriptionFromAPI:

&AtServer
Function getImageByDescriptionFromAPI()

    ReturnParameters = New Structure;
    ReturnParameters.Insert("Result");
    ReturnParameters.Insert("Error", False);

    structureBody = returnStructureBodyForImage(Object.Description);

    string_API_PATH = TrimAll(Constants.ChatGPT_API_PATH.Get());

    // v1/images/generations
    stringResourceAddress = TrimAll(Constants.ChatGPT_generations.Get());

    stringToken = TrimAll(Constants.ChatGPT_token.Get());

    HTTPConnection = New HTTPConnection(string_API_PATH, 443,,,,, New OpenSSLSecureConnection);

    HTTPRequest = New HTTPRequest();
    HTTPRequest.ResourceAddress = stringResourceAddress;
    HTTPRequest.Headers.Insert("Content-Type", "application/json");
    HTTPRequest.Headers.Insert("Authorization", "Bearer " + stringToken);

    JSONWriter = New JSONWriter;
    JSONWriter.SetString();
    WriteJSON(JSONWriter, structureBody);
    resultJSON = JSONWriter.Close();

    HTTPRequest.SetBodyFromString(resultJSON);
    HTTPResponse = HTTPConnection.Post(HTTPRequest);

    If HTTPResponse.StatusCode = 200 Then
        JSONReader = New JSONReader;
        JSONReader.SetString(HTTPResponse.GetBodyAsString());
        ReturnParameters.Result = ReadJSON(JSONReader, False);
    Else
        ReturnParameters.Result = HTTPResponse.GetBodyAsString();
        ReturnParameters.Error = True;
    EndIf;

    Return ReturnParameters;

EndFunction

It operates similarly to the function getFullDescriptionFromAPI we created earlier. The only difference is in the function that builds the request body, returnStructureBodyForImage. It is this function that defines what we pass to the neural network as parameters.

Source code for the function returnStructureBodyForImage:

&AtServer
Function returnStructureBodyForImage(stringProductDescription)

    structureBody = New Structure;

    structureBody.Insert("model", "dall-e-3");
    structureBody.Insert("n", 1);
    structureBody.Insert("size", "1024x1024");
    structureBody.Insert("style", "natural");

    structureBody.Insert("prompt", "Create a realistic advertising picture for " + stringProductDescription);

    Return structureBody;

EndFunction

The phrase Create a realistic advertising picture for... is the key one: it is from this phrase that the neural network understands what we want to get.

After the neural network has responded to us and returned a link to the image file, the platform launches function downloadAndApplyPicture.
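For context, a successful response from the image generation endpoint looks roughly like this (abridged; the values are illustrative); the code below extracts data[0].url from it:

```json
{
  "created": 1711700000,
  "data": [
    {
      "url": "https://oaidalleapiprodscus.blob.core.windows.net/private/..."
    }
  ]
}
```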

&AtServer
Function downloadAndApplyPicture(structureParameters)

    // The image URL is in structureParameters.Result.data[0].url
    stringImageURL = structureParameters.Result.data[0].url;

    string_API_PATH = "oaidalleapiprodscus.blob.core.windows.net";
    // Cut off everything up to and including "windows.net/" to get the resource address
    stringResourceAddress = Right(stringImageURL, StrLen(stringImageURL) - StrFind(stringImageURL, "windows.net/") - 11);

    HTTPConnection = New HTTPConnection(string_API_PATH, 443,,,,, New OpenSSLSecureConnection);

    HTTPRequest = New HTTPRequest(stringResourceAddress);
    HTTPResponse = HTTPConnection.Get(HTTPRequest);

    binaryData = HTTPResponse.GetBodyAsBinaryData();

    Return binaryData;

EndFunction

Using the received URL, the function downloads the image and returns it to the calling function as binary data.

For users' convenience, we also need to create two more form event handlers. One saves the product image in the required format when the item is written, and the other displays this image when the catalog item form is opened.

The code for both functions is given below.

&AtServer
Procedure BeforeWriteAtServer(Cancel, CurrentObject, WriteParameters)

    If IsTempStorageURL(refToImage) Then
        CurrentObject.image = New ValueStorage(GetFromTempStorage(refToImage));
    EndIf;

EndProcedure

&AtServer
Procedure OnReadAtServer(CurrentObject)

    refToImage = PutToTempStorage(CurrentObject.image.Get(), UUID);

EndProcedure

At this stage, we are ready to test our code in action. So, we launch 1C in dialog mode and create a new product in the Products catalog.

I could not resist the pleasure of creating products with the names Ford Mustang and Chevrolet Camaro :-)


Now, let's see what we've got. How do you like this Mustang?


Or such a Camaro?


 

Of course, this is nothing more than an educational example of how you can use the capabilities of modern neural networks in your daily work. But I hope that even such a simple example shows there is nothing complicated about integrating applications built on the 1C:Enterprise platform with modern services.

This is basically it. Let's sum things up. We have created an application on the 1C:Enterprise platform that interacts with a neural network and makes life easier for people involved in sales.

As always, you can download the source code of this application via the link.
