1C in AWS Cloud and AWS Lambda Use Cases

1C-Rarus

07.12.2020 18 min

Conceptual description of the Project

During one of our projects in South-East Asia, all IT systems had to be integrated into the Amazon Web Services (AWS) cloud in line with the Client’s IT strategy. This requirement also applied to the management accounting system “1C: Management Information System” that is currently under development.

Generally, we use the AWS cloud as IaaS. Our virtual machines reside in a private Amazon VPC and run on the AWS EC2 service.

aws-lambda-1.png

A distinctive feature of our system is that it collects data from various systems and presents it to top management in a consolidated format.

aws-lambda-2.png

As a result, our system uses a great number of different integration interfaces.

One of the most interesting is the integration with MS SQL using the serverless AWS Lambda and Amazon API Gateway services.

aws-lambda-3.png

The MS SQL cluster is based on the Amazon Relational Database Service (Amazon RDS). We get the data from a dedicated database (DB), a read replica. Sending queries to the read replica lets us reduce the load on the main (primary) DB.

Access to it must be provided in accordance with the requirements of ISO 27001, the international information security standard.

These requirements ruled out a direct connection to this MS SQL DB as a 1C external data source and led us to use API Gateway and Lambda functions for the integration.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs. It allows creating RESTful APIs using HTTP APIs or REST APIs, including incoming traffic (request) management with throttling rules based on the number of requests per second for each HTTP method in the API.

AWS Lambda is a serverless computing service that runs your code in response to events and automatically manages the underlying compute resources for you.

In our case, such events were HTTP requests from 1C. Some of those requests were quite simple, generating a result quickly; others were complex, requiring extended time for selections and computations. It was the second type that called for special development and an asynchronous exchange method for such “heavy” queries.

Implementing the Lambda functional component

Selecting a language for function development

AWS Lambda lets you run code without provisioning or managing servers. Programming languages can currently be classified into three types: interpreted, compiled, and JIT-compiled (using just-in-time compilation).

The difference between an interpreted and a compiled language lies in how the source code is processed. In the first case, a special program (an interpreter) executes the program instructions one by one, converting each of them into machine code and running it immediately. In the second case, the compiler processes the entire program text at once, saving the result as a file ready for subsequent repeated execution. JIT (just-in-time) compilers combine both approaches: first the compiler converts the source code into an intermediate representation known as bytecode and saves it to a file; then, when the file is executed, the bytecode is converted into machine code and run.

The AWS Lambda service has integrated support for interpreted (script) languages: PowerShell, Python, Ruby, Node.js; for the compiled Go; and for JIT-compiled ones: Java, C# (.NET Core). Moreover, the service provides a runtime API for creating functions in any other programming language, including compiled ones. To use this API, a runtime environment, i.e. some code that can execute the developed Lambda function, has to be prepared. The runtime is uploaded to the service along with the Lambda function.

When script languages are used, AWS Lambda enables you to edit the source code directly in the service management web console.

aws-lambda-4.png

C# (.NET Core) was selected as the language for function development, since the platform has a built-in library for MS SQL DB access. Besides, we have had experience in application development on this platform. We also considered Python as an alternative; however, it required compiling additional components for the MS SQL Server connection, as well as installing the ODBC driver for Linux, which, according to the feedback we found, could cause difficulties.

Setting up the development environment and infrastructure

We used Visual Studio Code for Windows when developing the functions. This environment is cross-platform, so everything below also applies to Linux and macOS.

  • Preparing the environment and setting up the project

Install the AWS Toolkit for VS Code.
Install the C# for Visual Studio Code extension.

To manage the AWS Lambda service from the command line, including the VS Code terminal, install the tools that let you manage the service from the developer’s workstation.

Install the AWS Command Line Interface (https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html).

Install the .NET SDK and the AWS integration tools. This enables you to compile functions and publish them to the service using the dotnet command line.

Install the AWS toolkit for dotnet in the command line:

dotnet tool install -g Amazon.Lambda.Tools

Install the AWS project templates:

dotnet new -i Amazon.Lambda.Templates

To publish functions to AWS using the command-line tools, you need to add the AWS user’s credentials to the %USERPROFILE%\.aws\credentials file on the developer’s workstation.
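
The file uses the standard AWS CLI profile format; with placeholder values, it looks like this:

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx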

Create a new project from the Lambda function template:

dotnet new lambda.EmptyFunction --name Functions --profile default --region ap-southeast-1

Open your project folder in VS Code. At this point, a project for function development already exists.

aws-lambda-5.png

The project uses GitLab (https://gitlab.com) to store source code and manage project versions. In addition to version control, this system enables the implementation of an entire Continuous Integration/Continuous Delivery (CI/CD) chain and provides rich integration capabilities with various systems for software build, testing, configuration, and container environment management.

Create a new project in the GitLab web interface (not to be confused with the VS Code project):

aws-lambda-6.png

VS Code has built-in Git integration, both in the graphical user interface and in the built-in terminal. Initialize the repository by running the command:

git init

aws-lambda-7.png

Connect it to the project on GitLab:

git remote add origin https://mygitlaburl/awslambda 

Then we can add the modified files to the index (stage), commit the changes, and push them to the repository on GitLab.
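
In the terminal, this corresponds to the usual sequence:

git add .
git commit -m "Initial commit"
git push -u origin master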

Project structure

The chart below shows the main project components implemented in the course of development.

aws-lambda-8.png

According to the requirements of the AWS Lambda architecture, each function must have a designated method, a handler, which is the entry point to the function; it accepts input parameters and returns a result. The FunctionHandles class contains the set of such handler methods for the implemented Lambda functions. In turn, these methods delegate to methods of internal classes that perform the actual operations.

The DBConnector class has been developed for MS SQL interaction. It has a single generic method, RunQuery, designed for executing queries against the DB.

The ConnectionFactory class provides a connection object for the target DB.
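
As an illustration, a minimal sketch of such a factory might look like this; the GetSQLConnection method name comes from the RunQuery code shown later, while the constructor and connection-string handling are our assumptions:

using Microsoft.Data.SqlClient;

public class ConnectionFactory
{
    private readonly string _connectionString;

    // the connection string is assumed to be supplied from configuration
    public ConnectionFactory(string connectionString)
    {
        _connectionString = connectionString;
    }

    // returns an open connection to the target MS SQL database
    public SqlConnection GetSQLConnection()
    {
        var conn = new SqlConnection(_connectionString);
        conn.Open();
        return conn;
    }
}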

Depending on the situation, SQL queries to the DB can use different filtering options. Therefore, the following two classes were created to store them: Queries, which contains the invariable parts of the queries, and Filters, which keeps the filtering options.

Two types of Lambda functions are used in the project: synchronous and asynchronous. They differ in how they process external requests but share a common method of receiving data. Hence, three classes have been developed to implement all the handlers:

  • LambdaBaseHandler is a set of methods and properties common to all functions. It generates queries to the DB, passes them to the DBConnector class for processing, and returns the result.
  • LambdaSyncHandler, a descendant of LambdaBaseHandler, contains properties and methods specific to synchronous functions. It processes input values, invokes the required methods of the base class, converts the result, and returns it.
  • LambdaAsyncHandler, a descendant of LambdaBaseHandler, performs the same tasks as LambdaSyncHandler, but for asynchronous functions.

In order to exchange data with the client side of the project, model classes representing entities of the DB domain have been developed. One such model, Airport, is presented in the chart.
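
Judging by the SQL query and the JSON response shown later, the Airport model is plausibly a simple class along these lines (a sketch, not the exact project code):

public class Airport
{
    // three-letter airport code, e.g. "CTS"
    public string Code { get; set; }

    // airport name, e.g. "New Sapporo"
    public string Description { get; set; }

    // airport identifier in the source PSS system
    public string PSS_ID { get; set; }
}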

The final project structure looks like this:

aws-lambda-9.png

For convenience, the project classes are grouped into folders according to their functionality.

The “Db” folder holds the classes ensuring interaction with the DB. The “Handlers” folder contains the classes implementing the entry points into the Lambda functions, converting incoming and outgoing data, and calling the DB interaction methods. The “Models” folder contains the domain entity model classes.

Loading functions to AWS

To load functions into the AWS Lambda service, the project source code should be compiled into assemblies and zipped. For this purpose, use the Lambda extension for the dotnet CLI:

dotnet lambda package

In the .\bin\Release\netcoreapp3.1\ folder, a Functions.zip file will be created that contains all software modules and project-specific dependencies.

Create a new function in the Lambda console, specifying its name and runtime environment.

aws-lambda-10.png

Open the function settings and upload the Functions.zip file in the Function code section:

aws-lambda-11.png

In the Basic settings section, specify the Handler in the following format: assembly::namespace.class-name::method-name, where “assembly” is the full name of the .NET assembly, “namespace.class-name” is the namespace and name of the class containing the function handler, and “method-name” is the method invoked when the function runs.
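
For example, assuming the assembly and the root namespace are both named Functions (the namespace here is our assumption; the project name comes from the dotnet new command above), the handler string for the GetAirports function described later might look like this:

Functions::Functions.FunctionHandles::GetAirportsHandler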

Furthermore, in the IAM service, you need to create a role with all the permissions required for the Lambda function to run and select it in the Existing role drop-down list.

The AWS Identity and Access Management (IAM) service lets you control access to AWS services and resources. With IAM, you can create and manage AWS users and groups and use permissions to grant or deny access to AWS resources.
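
At a minimum, such a role needs permission to write CloudWatch logs; the managed AWSLambdaBasicExecutionRole policy grants roughly the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}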

To publish functions in the AWS, you can use the AWS extension for dotnet CLI by running the command:

dotnet lambda deploy-function

During the process, you will need to specify the same information as when creating a function in the web interface.

The AWS Lambda service uses an event-driven architecture. In this model, events are used to activate applications and services.

An event-driven model has three key components: an event producer, an event router, and an event consumer. Events, which are data structures of a certain format, are generated by a producer and passed to the router, which filters them and routes them to the appropriate consumers. The AWS cloud supports many different events, such as API Gateway requests, CloudWatch logging, SNS messages, etc.

In our case, the API Gateway acts as both the event producer and the event router (it routes requests to the service), mapping external HTTP requests to the Lambda functions, which are the event consumers.

To activate the function with external events, you need to connect one or several possible triggers (event producers).

aws-lambda-12.png

In our case, the API Gateway is used. It can be connected to an existing API, or you can create a new one. As a result, we receive a link that can be used to invoke our Lambda function from outside.

aws-lambda-13.png

Debugging functions

Debugging, the process of identifying and removing errors from software code, involves reflection on and logical interpretation of the available information about the bug. Even debugging ordinary “monolithic” applications can take considerable time. In the case of serverless applications, the debugging effort increases because of the complexity of a software architecture that can involve many different services.

Function debugging was carried out in several stages. At an early testing stage, in a test environment, we developed an ordinary Web API over a local database. This let us quickly debug the SQL queries, develop the data acquisition and processing methods, and define the configuration of the external function endpoints. In particular, we found that some functions received large arrays of input parameters that exceeded the GET request size limit; this led us to switch to POST requests and to change the order of DB query processing, since the number of query parameters also exceeded a limit.

When developing the Lambda functions, we used local debugging and testing methods. The simplest of them was debugging via a console application that called the Lambda functions with the input parameters passed in.

The second way that we will focus on is the use of the AWS .NET Mock Lambda Test Tool (https://github.com/aws/aws-lambda-dotnet/tree/master/Tools/LambdaTestTool).

To install it, run the following in the VS Code terminal window:

dotnet tool install -g Amazon.Lambda.TestTool-3.1

Go to the debug view and select “Add configuration” in the drop-down list to the right of the “Start debugging” button. The launch.json file will open in the editor. Modify it as follows; the “program” parameter contains the path to the installed debugging tool.

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
                  "preLaunchTask": "build",
            "program": "${env:USERPROFILE}/.dotnet/tools/dotnet-lambda-test-tool-3.1.exe",
            "args": [],
            "cwd": "${workspaceFolder}",
            "console": "internalConsole",
            "stopAtEntry": false
        },
        {
            "name": ".NET Core Attach",
            "type": "coreclr",
            "request": "attach",
            "processId": "${command:pickProcess}"
        }
    ]
}

Save the file and start the debugger. We’ll get the error “Could not find the task ‘build’”. In the error window, press the “Configure Task” button, select “Create tasks.json from template”, and then select the “.NET Core” template. The tasks.json file will open in the editor. Modify it as follows:

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "command": "dotnet",
            "type": "shell",
            "args": [
                "build"
            ],
            "group": "build",
            "presentation": {
                "reveal": "silent"
            },
            "problemMatcher": "$msCompile"
        }
    ]
}

Start the debugger. The AWS .NET Mock Lambda Test Tool will open in a browser window.

aws-lambda-14.png

This tool allows us to generate all the required input parameters and start the function execution. During debugging, you can use all the standard VS Code debugging tools (breakpoints, variable inspection, etc.).

Debugging functions in the production environment, i.e. directly in the AWS cloud, is a more difficult task. In this project, we used the built-in logging capabilities provided by the AWS service.

The Lambda function handler can accept a calling context object with the ILambdaContext interface as an argument.


private ILambdaContext context;

public string FunctionHandler(string input, ILambdaContext context)
{
    // keep the context available to the logging method
    this.context = context;
    // ...
}

protected void LogMessage(string message)
{
    context.Logger.LogLine(string.Format("{0}:{1} - {2}",
        context.AwsRequestId, context.FunctionName, message));
}

The context object provides information about the function, e.g. its name, version, available memory limit, etc., as well as the Logger object; an example of its use is shown above. The LogMessage function accepts a string input parameter and writes it to the logging system, prepending the AWS request id and the function name. Hence, you can track the execution of the main function operations and check variable values.

The log data can be viewed in the CloudWatch service.

aws-lambda-15.png

The Function mechanism

To access the MS SQL DB, the Microsoft.Data.SqlClient library, the .NET data provider for SQL Server, is used together with the Dapper ORM (Object-Relational Mapping) library (https://github.com/StackExchange/Dapper).

ORM (Object-Relational Mapping) is a programming technique for converting data between incompatible type systems using object-oriented programming languages. This technique allows us to work with DB tables as programming language classes.

The Dapper library was developed by the Stack Overflow team. It is a set of extension methods for the ADO.NET data provider classes. It can be used not only with MS SQL but also with SQLite, Firebird, Oracle, MySQL, PostgreSQL, etc. Dapper is inferior to some “large” ORMs (e.g., Entity Framework, NHibernate) in terms of functionality; however, it surpasses them in performance.

In this project, the generic Query&lt;T&gt; extension method is used. It accepts the text of an SQL query and converts the result into the data model.

An example of the SQL query for the GetAirports function is presented below:

SELECT 
    SUBSTRING(Airports.str_Ident, 1, 3) AS Code,
    Airports.str_Name AS Description,
    Airports.lng_Airport_Id_Nmbr AS PSS_ID
FROM
    tbl_Airport AS Airports
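
As a rough sketch of how such a query is executed with Dapper, mapping rows to the Airport model (the connectionString variable is assumed to come from configuration):

using Dapper;
using Microsoft.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
{
    // Dapper maps each result row to an Airport instance by column alias
    IEnumerable<Airport> airports = conn.Query<Airport>(
        @"SELECT
            SUBSTRING(Airports.str_Ident, 1, 3) AS Code,
            Airports.str_Name AS Description,
            Airports.lng_Airport_Id_Nmbr AS PSS_ID
          FROM
            tbl_Airport AS Airports");
}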

The API Gateway was selected as the trigger for the Lambda functions. It limits the execution time of the called code to a maximum of 30 seconds. During testing we found that not all functions could complete within this limit, so we decided to execute some functions asynchronously, passing the result back via a callback. Synchronous functions are integrated with the API Gateway using Lambda Proxy, which lets you work with the “raw” request inside the function and also generate the response. For an asynchronous function call, Lambda Proxy is not available, so a custom integration with the API Gateway has to be configured.

Let's take a look at the mechanism of a synchronous function using the GetAirports example.

public APIGatewayProxyResponse GetAirportsHandler(APIGatewayProxyRequest request,
    ILambdaContext context)
{
    var Handler = new LambdaSyncHandler(context);
    return Handler.ProcessRequest<Airport, string>(Queries.GetAirports, request.Body,
        Filters.AirportsFilterByKeys, Filters.AirportsFilterCommon);
}

The function accepts an APIGatewayProxyRequest object as an input parameter. Its Body property contains the set of query parameters in JSON format.

After that, an instance of the internal LambdaSyncHandler class, which processes all synchronous queries, is created, and its ProcessRequest method is executed.

public APIGatewayProxyResponse ProcessRequest<T, T1>(string qry, string param,
    string keysFilter, string commonFilter = "", string defaultFilter = "")
{
    LogMessage("Request processing started");
    int statusCode;
    string body;
    T1[] searchKeys = ParseBody<T1>(param);
    try
    {
        // call of the corresponding handler method
        body = JsonConvert.SerializeObject(GetDataByKeysArray<T, T1>(qry, searchKeys,
            keysFilter, commonFilter, defaultFilter));

        statusCode = (int)HttpStatusCode.OK;
        LogMessage("Processing request succeeded.");
    }
    catch (Exception e)
    {
        statusCode = (int)HttpStatusCode.InternalServerError;
        body = e.Message;
        LogMessage(string.Format("Processing request failed - {0}", JsonConvert.SerializeObject(e)));
    }
    return CreateResponse(statusCode, body);
}

This is a generic method that allows parameters of different classes to be processed in the same way. As input parameters, it accepts the main query string qry; the JSON string param; and the keysFilter, commonFilter, and defaultFilter strings, which are used to generate the WHERE section of the SQL query. The type parameter T specifies the class of the objects returned by the method, and T1 is the class of the input parameters.

The param string is converted into an array of parameters of a certain type, in our case a string. After that, the GetDataByKeysArray method is called; it returns a set of T-class objects, in this case Airport. This set is then converted into the JSON string body, and the result is built up in the CreateResponse method. In case of an error, the serialized Exception object is written to the body variable.

In the GetDataByKeysArray method, a DBConnector object is created and its ‘RunQuery’ method is called.

public IEnumerable<T> RunQuery<T, T1>(string qry, T1[] searchKeys, string keysFilter,
    string commonFilter = "", string defaultFilter = "")
{
    List<T> result = new List<T>();

    string filter = commonFilter;

    using (var conn = _connectionFactory.GetSQLConnection())
    {
        if ((searchKeys != null) && (searchKeys.Length > 0))
        {
            filter += keysFilter;
            qry += filter;

            // splitting the parameter array into arrays of 2000 elements
            T1[][] SearchKeysSplited = searchKeys
                // converting the source array into a set of anonymous objects
                .Select((s, i) => new { Value = s, Index = i })
                // grouping by the Index field in groups of 2000
                .GroupBy(x => x.Index / 2000)
                // converting each group into an array
                .Select(grp => grp.Select(x => x.Value).ToArray())
                // converting the set of groups into an array of arrays
                .ToArray();

            foreach (var keys in SearchKeysSplited)
            {
                var DapperParam = new Dictionary<string, object>();
                DapperParam.Add("searchkeys_param", keys);

                result.AddRange(conn.Query<T>(qry, DapperParam, commandTimeout: _commandTimeout));
            }
        }
        else
        {
            filter += keysFilter;
            qry += filter;

            result.AddRange(conn.Query<T>(qry, new Dictionary<string, object>(), commandTimeout: _commandTimeout));
        }
    }
    return result;
}

In this method, the SQL query is built up from the main text and the additional WHERE filters, the query parameters are processed, and the query is executed.

Since SQL Server limits the number of parameters in a query to 2100, the parameters are processed sequentially in several queries. LINQ is applied to divide the parameters into groups of 2000.

LINQ (Language-Integrated Query) is a uniform query syntax for retrieving data from different sources. The data source might be an object implementing the IEnumerable interface (e.g., standard collections or arrays), a data set, or an XML document. Regardless of the source type, LINQ lets you apply the same approach to data selection.

In our case, LINQ to Objects is used on the array, with lambda-expression syntax (not related to AWS Lambda; they only share a name).

// splitting the parameter array into arrays of 2000 elements
T1[][] SearchKeysSplited = searchKeys
    // converting the source array into a set of anonymous objects
    .Select((s, i) => new { Value = s, Index = i })
    // grouping by the Index field in groups of 2000
    .GroupBy(x => x.Index / 2000)
    // converting each group into an array
    .Select(grp => grp.Select(x => x.Value).ToArray())
    // converting the set of groups into an array of arrays
    .ToArray();

As a result, we get a jagged two-dimensional array whose sub-arrays contain no more than 2000 parameters each.

By processing each of these arrays in a loop, we execute the queries to the DB.

foreach (var keys in SearchKeysSplited)
{
    var DapperParam = new Dictionary<string, object>();
    DapperParam.Add("searchkeys_param", keys);
    result.AddRange(conn.Query<T>(qry, DapperParam, commandTimeout: _commandTimeout));
}

The generic Query method is an extension of the SqlConnection class provided by the Dapper ORM; it automatically converts DB entities into the project models. The received data is passed back up the call stack and returned to the API Gateway in the Body property of the APIGatewayProxyResponse object.

Objects are automatically serialized and deserialized with the Amazon.Lambda.Serialization.Json.JsonSerializer class, attached at the assembly level:

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

Let’s consider the difference between asynchronous and synchronous methods.

Firstly, functions called asynchronously from the AWS API Gateway cannot use Lambda Proxy. For this reason, the integration has to be configured manually.

Secondly, all data received from the DB should be sent to 1C through a connection to its HTTP service. To ensure interaction with 1C and identification of the data batches, it is also required to accept the BatchID parameter from the request header and return it in the response.

The integration of the Lambda functions with the API Gateway is configured in the web console of the API Gateway service management.

aws-lambda-16a.png

To convert incoming requests into Lambda function parameters, we use mapping templates written in the Velocity template language (https://ru.wikipedia.org/wiki/Apache_Velocity).

aws-lambda-17.png

In this template, the input parameters for the Lambda function are extracted from the request body and, together with the BatchID, passed to the method entry point, where they are deserialized into a SearchKeysRequest&lt;string&gt; object (in the case of the GetCharges function).
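
A simplified sketch of such a mapping template (the actual template from the screenshot may differ) could be:

{
    "SearchKeys": $input.json('$'),
    "BatchID": "$input.params('BatchID')"
}

Here $input.json('$') returns the request body as JSON, and $input.params('BatchID') reads the BatchID value from the request (path, query string, or header).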

The handler of the asynchronous function appears as follows:


public void GetChargesHandler(SearchKeysRequest<string> req, ILambdaContext context)
{
    var Handler = new LambdaAsyncHandler(context);
    try
    {
        Handler.ProcessWithCallBack(req.SearchKeys, Queries.GetCharges, Filters.ChargesFilterByKeys);
    }
    catch (Exception e)
    {
        context.Logger.LogLine("ERROR:" + JsonConvert.SerializeObject(e));
    }
}

In general, the work of asynchronous functions does not differ from that of synchronous ones, except for the method of sending data to the client: synchronous functions send it in the API Gateway response, while asynchronous functions use the SendData method.


public void SendData(string data, IDictionary<string, string> headers = null)
{
    try
    {
        using (var cli = new HttpClient())
        {
            cli.Timeout = TimeSpan.FromMinutes(30);
            // Basic authorization
            var byteArray = Encoding.ASCII.GetBytes(_callbackAuthUser + ":" + _callbackAuthPass);
            // adding the authorization header
            cli.DefaultRequestHeaders.Authorization =
                new System.Net.Http.Headers.AuthenticationHeaderValue(
                    "Basic", Convert.ToBase64String(byteArray));
            // preparing the http message
            var msg = new HttpRequestMessage(HttpMethod.Post, _callbackUri);
            msg.Content = new StringContent(data, Encoding.UTF8, "application/json");
            // adding the function name to the headers
            msg.Headers.Add("function", FunctionName);
            // adding external data (BatchID) to the headers
            if (headers != null)
            {
                foreach (var header in headers)
                {
                    msg.Headers.Add(header.Key, header.Value);
                }
            }
            // sending
            using (var res = cli.SendAsync(msg).GetAwaiter().GetResult())
            {
                LogMessage("Callback response code: " + res.StatusCode.ToString());
            }
        }
    }
    catch (Exception e)
    {
        LogMessage("Data sending error: " + JsonConvert.SerializeObject(e));
    }
}

Calling Lambda functions from 1C

We faced the task of creating 17 integrations. Each of them had to be called by an HTTP request through the provided URL. Initially these were GET requests, but we soon hit the GET request size limit when passing large arrays of parameters. We therefore converted such requests into POST and subsequently, for consistency, converted all integrations to POST.

This is how the HTTP request call used by all integrations looks in a common module:

HTTPConnection = New HTTPConnection(Host, Port,,,,, New OpenSSLSecureConnection);
  ResourceAddress = MethodName + GETParameters;
  
  Headers = New Map;
  If BatchID <> Undefined Then
       Headers.Insert("BatchID", "" + BatchID);
  EndIf;
  If POSTParameters = Undefined Then 
         HTTPRequest = New HTTPRequest(ResourceAddress, Headers); 
         HTTPResponse = HTTPConnection.Get(HTTPRequest); 
    Else 
         Headers.Insert("Content-Type", "application/json");
         HTTPRequest = New HTTPRequest(ResourceAddress, Headers);
         HTTPRequest.SetBodyFromString(POSTParameters);
         HTTPResponse = HTTPConnection.Post(HTTPRequest);
    EndIf;
          
  ResponseString = HTTPResponse.GetBodyAsString();

  • BatchID is a unique identifier for logging the process of performing operations in our system, which keeps records of sent, received, and processed requests. It proved helpful for asynchronous queries as a way to keep track of the data that we expect to receive from AWS through the 1C web service.
  • POSTParameters are the parameters for the receiving party. In our case, they are passed in JSON format, with dates in the ISO 8601 format (yyyy-MM-ddTHH:mm:ss); an example is shown after this list.
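
For illustration, the POST body for a date-range function such as GetPSSCheckInData (the parameter names here are hypothetical) could look like this:

{
    "DateFrom": "2020-01-01T00:00:00",
    "DateTo": "2020-12-31T23:59:59"
}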

Other variables are used in a standard way for similar operations:

  • Host — string — server host, with which the connection is established.
  • Port — number — server port, with which the connection is established.
  • ResourceAddress — string — address of the resource to which the HTTP request will be made. Example: “getBalance?Currency=USD”.
  • Headers — Map — contains the request headers as a mapping between “Header name” and “Value”.

In case of successful execution, JSON containing the required data set is returned in the response body.


[
	{
		"Code": "CTS",
		"Description": "New Sapporo",
		"PSS_ID": "110"
	},
	. . .
	{
		"Code": "DAT",
		"Description": "Datong Yungang",
		"PSS_ID": "121"
	}
]

After the first test cycle, it turned out that only 10 integrations returned a result; the other 7 failed on the 30-second API Gateway timeout (described in the “Implementing the Lambda functional component: The Function mechanism” section). Hence, it was proposed to execute such “painful” integrations asynchronously.

aws-lambda-18a.png

aws-lambda-19a.png

For 1C, it looks as follows:

We send a POST request; however, the reply contains the notification “Query accepted, processing” instead of the data.
An HTTP service was developed and published on the web server.

aws-lambda-20.png

As soon as the data was ready, the callback function addressed 1C, transferring the requested data set to it. In other words, the same exchange takes place, only deferred: having sent a query, 1C does not wait for a response; the response arrives asynchronously at the HTTP service.

Since sending a query and receiving the data are not interrelated in any way, and when receiving a response 1C needs to know which query the response belongs to, a BatchID (UUID) assigned by 1C is always passed as a parameter of the asynchronous function call and is contained in the returned response.

Examples of asynchronous and synchronous calls in user business processes

Synchronous call as in the GetAirports case

The function returns the airports table of several hundred rows; with properly working communication channels, the whole architecture returns the result reliably and quickly, even if the entire table is requested.

Within our solution, if no parameters are passed, all data is returned; if the parameters are filled in, the data is returned for the keys passed.

aws-lambda-21a.png

For synchronous calls, a table containing the received data is generated dynamically upon request.

Asynchronous call as in the GetPSSCheckInData case

The start date, end date, and Batch ID are passed as parameters.

aws-lambda-22a.png

For asynchronous calls, a fragment of the data import log (described below) is displayed, specifying which configuration objects (catalogs, documents, information registers) were loaded with data, along with a detailed description of the events that occurred during processing.

Lifehacks

Data processor for testing and debugging

We also call it “1C Postman”.

Before the received data can be used as intended, there will be many iterations of testing, debugging, and modification. To simplify the process, we developed parameterizable data processors for almost all integrations that visually present the data sets received. The best practice is to include such data processors in the configuration, as other team members use them, in particular, to answer the questions:

  1. How can we see any changes in the data passed to us?
  2. What other data could we use?
  3. What does the “raw” data we receive look like?

Data import log

For analysis purposes, we have included a detailed data import log in all integrations. It helps us answer the question “What went wrong?” more effectively.

It looks like a journal with a hierarchical list of events. You can view it either in full or contextually, starting from the created object.

aws-lambda-23.png

Import queue

This idea was born during the implementation of the asynchronous functions, in order not to overload the third-party server with a great number of queries. The calls are queued up, and each subsequent query is sent only after the previous one has been completed.

In particular, we plan to use queues for batch data loading, e.g. when data for a year needs to be imported (day by day) for 3 integrations: 3 × 365 tasks will be queued in integration priority order and in ascending date order.

Conclusion

The world is changing. The goal of this article was to show some of these changes and to provide a kind of manual on how to live in the changing world. Beyond the manual itself, it is important to see that the space in which 1C lives these days has significantly expanded its borders. The changing order of things places new demands on understanding 1C architecture, its environment, safety and security, and software management, and makes absorbing new knowledge and practices essential. Working in the cloud, a continuing trend in the IT industry, means that 1C is no longer an isolated ecosystem: 1C experts now work side by side with DevOps specialists and with programmers in C#, Python, and other languages.

About 1C-Rarus

The 1C-Rarus Company Group, a joint venture of 1C and Rarus, was founded in 1994. Over more than 26 years of operation, more than 150,000 companies in Russia and the CIS countries, as well as subdivisions of the largest global companies, have become 1C-Rarus customers. 1C‑Rarus offices are open in 6 countries, with representative offices in Russia, Germany, Belarus, Ukraine, Uzbekistan, and Vietnam.

The 1C-Rarus Group employs over 2,700 specialists, most of whom are certified by 1C. The 1C‑Rarus management system complies with the international quality standard ISO 9001:2015.

The original version of the article:

https://rarus.ru/en/news/1c-in-aws-cloud-and-aws-lambda-use-cases/
