About workflow

1C Developer team

16.05.2019 20 min

In this article, we describe how we structure the workflow for the 1C:Enterprise platform, show how we perform quality assurance, and share some of the lessons we've learned while creating one of the most popular software systems in Eastern Europe.

People and processes

Several groups of up to 10 programmers each work on the platform. Three quarters of them write in C++, while the rest write in Java and JavaScript.

Each group focuses on a separate line of development, for example:
  • Development tools (Designer)
  • Web client
  • Server infrastructure and failover cluster
  • and more
There are more than a dozen groups. There is also a dedicated quality assurance group.

Of course, on a project of this size (over 10 million lines of code), there is no point in talking about collective code ownership: no one can hold that much code in their head. Instead, we try to keep the "bus factor" within each group at two or more.

We try to maintain a balance between team autonomy, which provides flexibility and increases development speed, and uniformity, which ensures effective communication between teams and with the outside world. As a result, we share a common version control system, build server, and task tracker (more on them below), as well as C++ coding standards, project documentation templates, regulations for handling bug reports from users, and more. Every team must follow these rules, which are developed and adopted by general consensus of the group leaders.

At the same time, when it comes to internal practices, the teams have quite a bit of autonomy. For example, code reviews are used by all the teams (and there are common rules that define when a review is required), but the internal review procedures were introduced at different times and therefore may differ.

The same applies to the workflow. Some teams use Agile methods, while others follow different project management styles. Canonical Scrum, it seems, is nowhere to be found: the specifics of a boxed product impose their own limitations. For example, the valuable practice of sprint demos may not be applicable in its pure form. Other practices, such as the Product Owner role, map onto things we already do: the team leader usually acts as the Product Owner in their area. In addition to technical leadership, one of a team leader's most important tasks is deciding on the future direction of development. Formulating the strategy and tactics of platform development is an interesting and complex subject, and we've devoted an entire article to it.

Working on tasks

When a decision is made to implement a feature, its profile is worked out in a series of discussions that involve, at a minimum, the developer responsible for the task and the team leader. Other team members, or members of other groups with the required expertise, are often brought in as well. The final version is then approved by the management of 1C:Enterprise platform development.

The decisions made in these discussions cover:
  • What is and isn't included in the scope of the task
  • How we see the usage scenario. Even more important is an understanding of what potential scenarios we won't be supporting
  • How the user interfaces will look
  • How the API for the application developer will look
  • How the new functionality will be combined with the existing functionalities
  • How it will work with security
  • and more
In addition, lately we have been trying to discuss tasks with a broader circle of potential customers. For example, at a recent workshop we talked about new options for working with binary data that were still at the design stage, answered questions, and came away with a number of potential usage scenarios that no one had thought of before.

When work begins on a new feature, a task is created for it in the task tracker. The tracker, by the way, is written in 1C:Enterprise and is simply called the Task Database. For each task, a project document is stored in the tracker; in essence, it is the specification for the task. It comprises three main parts:
  • An analysis of the problem and possible solutions
  • A description of the solution to be implemented
  • A description of the technical details of the solution implementation
The project document can be prepared entirely before implementation begins, or it can take shape later if the task first requires research or a prototype. In any case, this is an iterative process rather than a waterfall one: the project document is developed and refined in tandem with the implementation. The main thing is that by the time the task approaches completion, the project document must be approved in every detail. And the details may be numerous, for example:
  • Terminology must be unified. If the term "Save" is used somewhere in the Platform in a similar situation, then there needs to be a serious justification to use the term "Write".
  • Approaches must be unified. Sometimes, for the sake of simplicity and a consistent user experience, old approaches have to be repeated in new tasks, even if they have obvious disadvantages.
  • Compatibility. Even when the old behavior cannot be kept as the default, compatibility has to be kept in mind. Applications often contain workarounds for known issues, and a change in behavior can break things for end users. Therefore, we often retain the old behavior in a "compatibility mode": existing configurations running on a new release of the platform stay in compatibility mode until their developer makes a conscious decision to stop using it.
In addition, the project document includes a summary of the task discussion, so that later one can understand why a particular option was accepted or rejected.
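
To make the compatibility-mode idea more tangible, here is a minimal C++ sketch of how such a switch might look. It is purely illustrative: the names, the enum, and the function are not taken from the actual platform code.

#include <string>

// Purely illustrative: neither the enum nor the functions come from the real
// 1C:Enterprise code base.
enum class CompatibilityMode
{
    Version8_3_5,  // reproduce the behavior of the previous release
    None           // current behavior
};

// Replaces every occurrence of `from` with `to` in `text`.
static std::wstring replace_all(std::wstring text, const std::wstring& from, const std::wstring& to)
{
    for (std::wstring::size_type pos = text.find(from); pos != std::wstring::npos;
         pos = text.find(from, pos + to.size()))
        text.replace(pos, from.size(), to);
    return text;
}

std::wstring escape_ampersand(const std::wstring& value, CompatibilityMode mode)
{
    // In compatibility mode the old behavior is kept as-is, so existing
    // configurations continue to work until their developer explicitly opts out.
    if (mode == CompatibilityMode::Version8_3_5)
        return value;                              // old behavior: no escaping
    return replace_all(value, L"&", L"\\u0026");   // new behavior: escape "&"
}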

Once the project document is approved and the developer has implemented the new functionality in a feature branch in SVN (or in Git, if the development is done in the new IDE), the task must pass code inspection and manual testing by other members of the group. On top of that, automated tests are run on the feature branch, as described below. At this stage, another technical document is created: a description of the task aimed at testers and technical writers. Unlike the project document, it contains no technical implementation details; instead, it is designed to make it quick to grasp which parts of the documentation must be updated, whether the new feature involves incompatible changes, and so on. The approved and corrected task is merged into the main branch of the release and becomes available to the test group.

Lessons and recipes
  • The value of the project document, as with any documentation, is not always obvious. For us, its value comes from the following:
    • During the design process, it helps everyone involved reestablish the context of the discussion and ensure that the decisions that have been made will not be neglected or distorted.
    • Later, in doubtful cases where we are not sure of the proper behavior, the project document helps us recall the decision itself and the grounds for adopting it.
    • The project document is the starting point for user documentation. Developers don't need to write anything from scratch or orally explain anything to the technical writers because the project document serves as a basis.
  • We should always describe usage scenarios for new functionality, and not in generalities but in detail: the more, the better. If this is not done, the resulting solution might turn out to be hard or even impossible to use because of some minor detail. In Agile development, such details are easy to fix in the next iteration, but in our case a fix might take years (the complete cycle: a final version of the platform is released -> a configuration using its new features is released -> user feedback is collected -> corrections are implemented -> a new platform version is released -> the configuration is updated to use the corrections -> users migrate to the new version of the configuration).
  • Even better than scenarios is a prototype placed in the hands of real users (configuration developers) before the version is officially released and the behavior is set in stone. We are only beginning to broaden this practice, and in almost every case it has yielded valuable knowledge. Often that knowledge concerns not the functionality itself but non-functional behavior (e.g., logging or the ease of diagnosing errors).
  • In the same vein, performance criteria need to be determined in advance, and compliance with them needs to be verified. Before we added this to the task acceptance checklist, we sometimes skipped that part; a minimal sketch of such a check follows this list.
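
Here is a minimal sketch of what such a check can look like among the automated tests, in the same Google Test style as the unit tests shown later in this article. parse_large_document is a hypothetical stand-in for the operation whose performance target was agreed on, and real measurements of course require warm-up, repetition, and a controlled machine.

#include <chrono>
#include <gtest/gtest.h>

// Hypothetical stand-in for the operation whose performance criterion was fixed
// in the project document.
static void parse_large_document()
{
    // ... the real operation under test goes here ...
}

TEST(Performance, ParseLargeDocumentMeetsAgreedBudget)
{
    const auto start = std::chrono::steady_clock::now();
    parse_large_document();
    const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);

    // The agreed criterion, e.g. "under 500 ms on the reference machine".
    // A coarse guard like this mainly catches gross regressions; detailed
    // performance analysis is done separately.
    EXPECT_LT(elapsed.count(), 500);
}
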
Quality assurance

In general, "quality" and "quality assurance" are very broad terms. At least two distinct processes can be identified within them: verification and validation. Verification usually refers to checking that the software's behavior complies with the specification and is free of other obvious errors, while validation refers to checking that the software meets the users' needs. In this section, we focus on quality assurance in the sense of verification.

Testers get access to a task only after it has been added to the main branch, but the quality assurance process begins much earlier. Recently, we had to invest considerable effort in improving it, because it became apparent that the existing mechanisms were no longer adequate for the increased volume of functionality and its markedly increased complexity. Judging by the feedback from 1C:Enterprise partners on version 8.3.6, these efforts have already produced results, but a lot of work, of course, still lies ahead.

Existing mechanisms for quality assurance can be categorized as organizational or technological. Let's start with the latter.

Tests

When it comes to quality assurance mechanisms, tests are what immediately come to mind. We, of course, use them as well, and in several different ways:

Unit tests

We write unit tests in C++. As mentioned in the previous article, we use derivative versions of Google Test and Google Mock. For example, a typical test that checks the escaping of the ampersand character ("&") when writing JSON looks like this:
TEST(TestEscaping, EscapeAmpersand)
{
    // Arrange
    IFileExPtr file = create_instance<ITempFile>(SCOM_CLSIDOF(TempFile));
    JSONWriterSettings settings;
    settings.escapeAmpersand = true;
    settings.newLineSymbols = eJSONNewLineSymbolsNone;
    JSONStreamWriter::Ptr writer = create_json_writer(file, &settings);
    // Act
    writer->writeStartObject();
    writer->writePropertyName(L"_&_Prop");
    writer->writeStringValue(L"_&_Value");
    writer->writeEndObject();
    writer->close();
    // Assert
    std::wstring result = helpers::read_from_file(file);
    std::wstring expected = std::wstring(L"{\"_\\u0026_Prop\":\"_\\u0026_Value\"}");
    ASSERT_EQ(expected, result);
}

Integration tests

The next level of testing is integration tests written in 1C:Enterprise; they make up the bulk of our tests. A typical test suite is a single infobase stored in a *.dt file. The test infrastructure loads this database and calls a predefined method in it, which runs the individual tests written by developers and formats the results so that the CI (Continuous Integration) infrastructure can interpret them.
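// Writes a simple array to JSON and compares the resulting file with a stored reference.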
&AtServer
Procedure test_Array_Simple() Export
     FileName = GetTempFileName("json");
     ReferenceName = "reference_Array_Simple";
     Value = CommonModule.GetSimpleArray();
   
     JSONWriter = GetOpenJSONWriter(FileName);  
   
     WriteJSON(JSONWriter, Value);
   
     JSONWriter.Close();
   
     CommonModule.CompareFileWithReference(FileName, ReferenceName);
EndProcedure

In this case, if the result of the write does not match the reference, an exception is thrown. The infrastructure intercepts the exception and interprets it as a test failure.
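
As a simplified sketch of that interception logic (not the actual infrastructure code; the names here are hypothetical), the runner essentially wraps each test call like this:

#include <exception>
#include <functional>
#include <string>

// Outcome of one integration test as reported to the CI system.
struct TestResult
{
    std::string name;
    bool passed;
    std::string message;  // diagnostic text when the test failed
};

// Runs a single test procedure; any exception that escapes it is recorded as a failure.
TestResult run_one_test(const std::string& name, const std::function<void()>& test_body)
{
    try
    {
        test_body();
        return {name, true, ""};
    }
    catch (const std::exception& e)
    {
        return {name, false, e.what()};
    }
    catch (...)
    {
        return {name, false, "unknown error"};
    }
}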

Our CI system performs these tests for different versions of operating systems and DBMS, including 32- and 64-bit Windows and Linux, as well as MS SQL Server, Oracle, PostgreSQL, IBM DB2, and our proprietary file database.

Custom test systems

The third and most complex form of testing is the so-called custom test systems. They are used when the scenarios being tested extend beyond a single 1C infobase, for example, when testing interaction with external systems through web services. For each test group, one or more virtual machines are allocated, and special agent software is installed on each machine. In other respects, the test developer has complete freedom and is limited only by the requirement to output the results as a file in the Google Test format that the CI system can read.

For example, a service written in C# is used to test a SOAP web service client, while a massive testing framework written in Python is used to test various Designer features.
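
The "Google Test format" mentioned above is most likely the XML report layout that Google Test itself can produce (via --gtest_output=xml). A custom test system only has to emit a file of the same shape; a simplified sketch (hypothetical names, one suite, no XML escaping of messages) might look like this:

#include <fstream>
#include <string>
#include <vector>

// Result of one test case produced by a custom test system.
struct CaseResult
{
    std::string name;
    bool passed;
    std::string message;  // failure description, empty when the test passed
};

// Writes results in the XML layout that Google Test produces, so the CI server
// can pick the file up like any other test report.
void write_gtest_style_xml(const std::string& path, const std::string& suite,
                           const std::vector<CaseResult>& results)
{
    int failures = 0;
    for (const CaseResult& r : results)
        if (!r.passed)
            ++failures;

    std::ofstream out(path);
    out << "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
        << "<testsuites name=\"AllTests\" tests=\"" << results.size()
        << "\" failures=\"" << failures << "\">\n"
        << "  <testsuite name=\"" << suite << "\" tests=\"" << results.size()
        << "\" failures=\"" << failures << "\">\n";
    for (const CaseResult& r : results)
    {
        out << "    <testcase classname=\"" << suite << "\" name=\"" << r.name << "\"";
        if (r.passed)
            out << "/>\n";
        else
            out << ">\n      <failure message=\"" << r.message << "\"/>\n    </testcase>\n";
    }
    out << "  </testsuite>\n"
        << "</testsuites>\n";
}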

The flip side of this freedom is the need to configure the tests manually for each operating system, manage a fleet of virtual machines, and bear other overhead costs. Therefore, as our integration tests (described in the previous section) evolve, we plan to limit the use of custom test systems.

The tests described above are written by platform developers, either in C++ or by creating small configurations (applications) designed to exercise specific functionality. This is necessary for eliminating errors, but it is not sufficient, especially in a system like the 1C:Enterprise platform, where most features are not used directly by end users but rather serve as a foundation for building applications. Therefore, there is an additional echelon of tests: automated and manual test scenarios for real applications. This group also includes stress tests, which is a big and interesting topic in its own right, so we'll be dedicating a separate article to it.

All of these kinds of tests are run through CI. Jenkins serves as our continuous integration server.

For each build configuration (Windows x86 and x64, Linux x86 and x64), a build task is set up. These tasks run in parallel on different machines. Building a configuration takes a long time: even on powerful hardware, compiling and linking such a large volume of C++ code is no quick job. In addition, creating the packages for Linux (deb and rpm) turns out to take a comparable amount of time.

Thus, a "shortened build cycle" works in the course of a day, which verifies compilations for Windows x86 and Linux x64 and executes the minimum battery of tests, and a regular build cycle runs every night, it builds all configurations and drives all the tests. Each night's build that is built and tested is marked with a tag so that the developer while creating a branch for the task or applying changes from the main branch, can be confident that they’re working with a compiled and workable copy. Currently, we are working to ensure that a regular build cycle is launched more frequently and includes more tests. The ultimate goal of this work is to detect errors through testing (if they can be detected by tests) within two hours after the commit so that any error that is detected is corrected before the end of the workday. This response time dramatically increases efficiency. Firstly, the developer themselves do not need to restore the context they were working with when the error was introduced; secondly, this lowers the likelihood that the error will hinder other work in progress as well.

Static and dynamic analysis

Man does not live by tests alone! We also use static code analysis, which has proved its effectiveness over many years. At least once a week it finds an error, and often one that is difficult to catch through testing.

We use three types of analysis tools:
  • CppCheck
  • PVS-Studio
  • The code analysis tool built into Microsoft Visual Studio
They all work a little differently and locate different types of errors, so we like the way they complement each other.
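
A typical example of the kind of defect these tools report while functional tests stay green (an illustrative fragment, not code from the platform):

#include <string>

struct ConnectionSettings
{
    std::wstring host;
    std::wstring port;
};

bool settings_equal(const ConnectionSettings& a, const ConnectionSettings& b)
{
    // Copy-paste defect: the second comparison checks b.port against itself,
    // so the port is never actually compared. Static analyzers flag the
    // identical sub-expressions, while most tests happen to pass anyway.
    return a.host == b.host && b.port == b.port;
}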

In addition to the static methods, we also check the behavior of the system at runtime using Address Sanitizer (part of the Clang project) and Valgrind.
These two radically different tools are generally used for the same thing: to find memory-related malfunctions, such as:
  • accessing uninitialized memory
  • accessing freed memory
  • reading or writing beyond array boundaries, etc.
On several occasions, dynamic analysis has found errors that escaped extensive manual attempts to track them down. This prompted us to organize automated batch runs of certain groups of tests with dynamic analysis enabled. Continuous use of dynamic analysis for all test groups is not feasible due to performance limitations: Address Sanitizer slows execution down by roughly a factor of 3, and Valgrind by one to two orders of magnitude! But even their limited use yields good results for us.
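
For example, an error of the "beyond the array boundaries" kind like the one below may produce no visible failure in ordinary runs, yet is reported immediately under these tools (again, an illustrative fragment rather than real platform code):

#include <cstddef>
#include <vector>

// Reads a flag by an index that comes from external data (e.g. a parsed file).
// Off-by-one bug: when index == flags.size(), the element one past the end of
// the heap buffer is read. An ordinary run usually just yields a garbage value,
// so functional tests can stay green; AddressSanitizer reports the access as a
// heap-buffer-overflow, and Valgrind as an invalid read, on the first run.
bool flag_at(const std::vector<int>& flags, std::size_t index)
{
    if (index > flags.size())      // should be: index >= flags.size()
        return false;
    return flags[index] != 0;
}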

Organizational quality assurance measures

In addition to the automated tests performed by machines, we try to build quality assurance into the daily development process.

The most widely used practice for this purpose is peer code review. In our experience, code reviews rarely catch specific errors (although it does happen occasionally), but they prevent errors from appearing in the first place by keeping the code readable and well organized; that is, they ensure quality over the long run.
Another practice is manual cross-checking of each other's work within a group: it turns out that even cursory testing by someone who is not immersed in the task helps identify errors early on, even before the task is wrapped up.

Eat your own dog food

But the most effective of all the organizational measures is an approach Microsoft calls "eating your own dog food": the product's developers are its first users. In our case, the "product" is our task tracker (the aforementioned Task Database), which developers use throughout the day. Every day this configuration is migrated to the latest version of the platform produced by the CI system, so all flaws and shortcomings immediately make themselves felt to their authors.

We would like to emphasize that the Task Database is a serious information system: it stores information on tens of thousands of tasks and issues and has over a hundred users. It is not comparable to the largest 1C:Enterprise implementations, but it is comparable to those of a medium-sized company. Of course, not every mechanism can be checked this way (the accounting subsystem, for example, cannot), but to increase the coverage of the functionality being exercised, different groups of developers have agreed to use different connection methods: some use the web client, others the thin client on Windows, and still others work on Linux. In addition, several instances of the Task Database server run in different configurations (different versions, different operating systems, and so on) and are synchronized with each other using the platform's built-in mechanisms.

In addition to the Task Database, there are other "experimental" databases, but they have less functionality and are not as heavily loaded.

Lessons learned
  • When dealing with such a large and widely used product, it is cheaper to write a test than to skip writing one. If an error in functionality goes uncaught, the cost of reproducing, correcting, and verifying it, borne by end users, partners, support, and ultimately the development department, will be much higher.
  • Even if writing automated tests is difficult, one can ask the developer to prepare a formal description of their manual tests. Reading it, one can find gaps in how the developer tested their own work, and therefore potential errors as well.
  • Creating infrastructure for CI and tests is an expensive endeavor, in terms of both money and time, especially for a mature project. So be sure to start as early as possible!
And one more finding that does not relate directly to this article but should be shared nevertheless: the best way to test a framework is to test the applications built on it. How we test the platform with applications such as 1C:Accounting will be the subject of a future article.