What’s all this again? Do I really need it? Is it just going to cost me time and money again?! Can’t everything simply stay the way it is? Many people ask themselves such questions whenever a new trend appears and companies consider whether or not to adopt it. In the following, we explain what Continuous Testing (CT) is, how to make the best use of the underlying concept, and why the automation of tests with CT is far more than just a trend.
Put very simply, Continuous Testing is the integration of automated tests into the continuous software delivery process. It is also common to integrate automated tests at several points within one process, for example after particular steps of the delivery pipeline.
In principle, all kinds of tests are suitable: functional tests such as unit tests, integration tests, API tests, end-to-end tests or layout tests, as well as non-functional tests such as load and performance tests, security tests or usability tests. However, it usually makes no sense to run every available test after each step in the delivery process. In other words, there is no recipe or clearly formulated step-by-step instruction for how Continuous Testing should be used or implemented. Continuous Testing is better understood as a pattern for testing continuously with short feedback loops. The basis is that you always have to consider how automated tests can be used most effectively for your specific environment, software, processes, infrastructure, etc., measured in terms of cost/benefit and the informative value of each test type. It is enormously important that all parties involved consider and define in advance where and when which tests make sense. This can only be decided by everyone together, since different people have different roles as well as different views and priorities.
The advantage of automated tests is that they only have to be implemented once, usually run faster than manual tests and always guarantee an exactly identical procedure. Continuous Testing leverages precisely these advantages: once implemented, automated tests are carried out repeatedly after each change within the existing software delivery processes. At first glance, it might appear that processes are unnecessarily delayed if tests run repeatedly and thereby delay the software from going live. In pure time terms, this may be true – carrying out tests naturally takes time. But this time expenditure is worth accepting.
In short, Continuous Testing gives the entire team greater confidence and a good feeling about successful deliveries.
In terms of its basic idea, Continuous Testing is really a very simple approach. As is so often the case, the complexity lies in the details and in the execution. To understand how this approach came about, it helps to look at how software delivery processes have evolved.
"Previously" (see Figure 1), software used to be developed according to the waterfall model: software was first planned and the requirements documented in writing. After this, development took place, and testing came right at the end, for everything at once. Usually, there were only a few releases a year. Deployments to the test environment (stage) and to production were frequently still associated with a great deal of manual effort, and the tests that were carried out were largely manual and frequently exploratory in nature.
Test checklists were also typical, which were worked through from point 1 to point n after each release. At that time, unit testing was already an established topic, but far from mature; integration tests or API tests were more the exception. End-to-end tests (user acceptance tests) were already in the air and received further impetus in the web sphere through the launch of Selenium. However, these tests also proved to be very error-prone at that time, as well as comparatively slow and sluggish. In some cases there were also load or performance tests, which were carried out downstream. The motto often was: "The user is our best tester and will provide us with feedback."
Thanks to the growing popularity of agile approaches and methods, testing and the mindset around the software delivery process have also adapted and developed further.
Concepts such as "time to market" and "potentially shippable" led to deployments being largely automated in the form of CI/CD pipelines. As a result, release cycles were greatly shortened. However, it is often still the case that large software components are rolled out in large deployments.
There has also been a lot of optimisation in the field of testing. The importance of unit tests has continued to grow. Integration tests and API tests have gained in value and are automated more frequently. There are still many end-to-end tests. All of these tests have now been integrated into automated deployment processes. End-to-end tests partly still run as downstream processes, since they continue to be more error-prone and slower than unit or integration tests. Even non-functional tests, such as load or performance tests, have in part already been integrated. (Figure 2)
Consequently, the first step towards Continuous Testing has already been taken. But how does the current model become a Continuous Testing model? The answer is actually quite simple: divide your software into services / microservices and ensure that each service can be delivered independently of all the others via its own deployment process. Where necessary and appropriate, each deployment step of each service is then verified by suitable automated tests, which have to be defined.
In theory, this sounds very simple indeed, but the details require a great deal of effort to implement. Each service should have its own CI/CD pipeline. To avoid redundancy in creating and maintaining the pipelines and to obtain a uniform pipeline structure, a pipeline template should be defined, which each service then uses, implements and, if necessary, adapts (for a graphic illustration, see Figure 3).
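To make the idea of a shared pipeline template more tangible, here is a minimal sketch in Python. It is purely illustrative: real templates would typically live in the CI server itself (for example as a Jenkins shared library), and all names, parameters and stages shown here are hypothetical.

```python
# Minimal sketch of a reusable pipeline template (hypothetical names and stages,
# not tied to a specific CI server). Each service supplies only its own
# parameters; the stage structure and the tests attached to each stage are
# defined once in the template.
from dataclasses import dataclass, field


@dataclass
class ServiceConfig:
    name: str                       # e.g. "checkout-service" (example name)
    run_e2e_tests: bool = True      # services may opt out of slow test stages
    extra_stages: list = field(default_factory=list)


def pipeline_template(cfg: ServiceConfig) -> list[str]:
    """Return the ordered stages of one service's deployment pipeline."""
    stages = [
        f"build {cfg.name}",
        f"unit tests {cfg.name}",
        f"deploy {cfg.name} to stage",
        f"integration/API tests {cfg.name}",
    ]
    if cfg.run_e2e_tests:
        stages.append(f"end-to-end tests {cfg.name}")
    stages += cfg.extra_stages          # service-specific additions
    stages.append(f"deploy {cfg.name} to production")
    return stages


if __name__ == "__main__":
    for cfg in (ServiceConfig("checkout-service"),
                ServiceConfig("pricing-service", run_e2e_tests=False,
                              extra_stages=["load tests pricing-service"])):
        print(cfg.name, "->", pipeline_template(cfg))
```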
In this way, automated tests – unit tests, integration and API tests, end-to-end tests, load and performance tests or any other kind of test – can be carried out for each deployment of each defined service. In each case, it has to be defined which tests are necessary or sensible, and when.
Particularly with regard to the error-prone and very slow end-to-end tests, a number of questions need to be clarified in order to determine the right scope of tests for the deployment pipelines.
At first, all of this may appear very theoretical. To make the subject more practical, we will use an example as an introduction to Continuous Testing. Figure 4 illustrates a possible implementation procedure.
Let us assume a company has decided to adopt Continuous Testing. It cannot find the resources or skills needed within its own workforce and obtains the corresponding know-how from consultants, who support its own team.
The point of departure is a web-based application consisting of a front-end and a back-end platform. Both back end and front end are provided as services that communicate via REST. Further functional web services / microservices are provided, controlled by the front end or via an API (likewise via REST). On the one hand, there are manual tests that run via the web GUI of the application; on the other, the automated end-to-end tests are carried out locally at intervals on a developer machine. There are no interface tests, and running the existing automated tests in Jenkins is neither implemented nor planned.
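To illustrate what an automated API test for such a REST back end could look like once it is integrated into a pipeline, here is a minimal sketch using pytest and requests. The base URL, endpoints and response fields are assumptions for illustration only and are not part of the actual project.

```python
# Illustrative API tests for a REST back end, runnable with pytest.
# The base URL and the endpoints are hypothetical examples.
import os

import requests

BASE_URL = os.environ.get("API_BASE_URL", "https://stage.example.com/api")


def test_health_endpoint_responds_ok():
    # The service should answer quickly and with HTTP 200 on its health endpoint.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


def test_product_listing_returns_json_list():
    # A functional check on a hypothetical resource exposed by the back end.
    response = requests.get(f"{BASE_URL}/products", timeout=5)
    assert response.status_code == 200
    assert isinstance(response.json(), list)
```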
The solution proposed includes the following features:
As a result, continuous, integrated and automated testing provides added value in all pipelines, with the quality of each pipeline and deployment artefact being tested continuously. Appropriate reporting and, where necessary, dependencies between deployment pipelines prevent known or newly discovered defects from being rolled out. In addition, manual testing activities and the associated effort are avoided, and deployment in a theoretically 24/7 rhythm becomes feasible.
Splitting the software into independent (micro)services, including creating the deployment pipelines and integrating the tests into this process, certainly takes time and cannot be achieved from one day to the next. Continuous Testing is also just one part of the overall continuous life cycle. But the result of a successful implementation is worth it.
Continuous Testing is a very helpful concept for securing and even boosting the quality of software and of the delivery processes. In this respect, Continuous Testing is also undoubtedly more than just a trend and could quickly establish itself as a constant or pattern in the continuous integration environment.
Continuous Testing as part of continuous delivery / deployment is and remains a fascinating subject. In the future, companies will hardly be able to do without it if they want to ensure high quality while delivering software in short cycles. In an era where everything has to be delivered as quickly as possible, quality should not and must not suffer as a result. And for this, Continuous Testing is exactly the right approach.
If a company operates CI/CD as a central service, SLAs are usually specified for this service – as for other company services. These generally include specifications for availability, performance and/or reaction times to incidents. SLAs originated from the IT environment and have been used for quite some time, especially in the area of hosting. There, the framework conditions for the provision of a service are sufficiently well understood and definable.
This does not apply in the same way to KPIs. Formulating them for an application service is generally not that easy. What’s more, in an agile or DevOps environment, KPIs also require constant feedback, and this feedback continuously modifies the initially selected KPIs. Over time and with the increasing maturity of the CI/CD service, established metrics take a back seat and other – usually "superior", more complex – metrics gain in importance. This is normal and no reason to panic.
KPIs are often also used to steer employee evaluations and/or bonuses. For a long time, these targets came from the upper hierarchical levels. Increasingly, DevOps teams now develop KPIs together; these are then subject to constant feedback and can change during operation.
The most important mantra for DevOps is 'Measure', i.e. a large number of metrics are recorded and evaluated with the help of suitable tools.
For CI/CD as a service, this means recording a host of metrics from a variety of systems:
Upstream systems such as SCM, LDAP, mail, HTTP proxy, ticketing
Infrastructures such as build servers, agents, test machines
Performance data of the application in production environments
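As a sketch of what recording such metrics can look like in practice, the following Python snippet reads a few basic values for the last build of a Jenkins job via its JSON API. The server URL, job name and credentials are placeholders; the fields result, duration and timestamp are part of Jenkins' standard build JSON.

```python
# Minimal sketch: read basic metrics for the last build of a Jenkins job.
# URL, job name and credentials are placeholders.
import requests

JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "my-service-pipeline"

resp = requests.get(
    f"{JENKINS_URL}/job/{JOB_NAME}/lastBuild/api/json",
    auth=("user", "api-token"),
    timeout=10,
)
resp.raise_for_status()
build = resp.json()

# 'result' (SUCCESS/UNSTABLE/FAILURE), 'duration' (ms) and 'timestamp' (epoch ms)
# can be fed into a metrics store for later evaluation.
print(build["result"], build["duration"] / 1000.0, build["timestamp"])
```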
Once all these measured values have been recorded, the work begins…
SLA definitions must be mapped to the measured data: does "available" mean that the system in question is reachable at all, or that it responds to a defined request within a defined maximum time, for example? This corresponds to formulating a "Definition of Done" (DoD) from the agile environment.
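One simple way to make such a definition testable is to encode it directly as a check, for example "available" meaning "answers a reference request with HTTP 200 within two seconds". The following sketch illustrates this; the URL and the threshold are example assumptions.

```python
# Sketch: "available" defined as "responds to a reference request with HTTP 200
# within a maximum time". URL and threshold are example values.
import requests

MAX_RESPONSE_SECONDS = 2.0
CHECK_URL = "https://ci.example.com/login"


def is_available(url: str = CHECK_URL, max_seconds: float = MAX_RESPONSE_SECONDS) -> bool:
    try:
        response = requests.get(url, timeout=max_seconds)
    except requests.RequestException:
        return False          # timeout or connection error counts as "not available"
    return response.status_code == 200


if __name__ == "__main__":
    print("available" if is_available() else "not available")
```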
It is also important to find an equivalent for KPIs in the collected data. An "indicator" is not an absolute measured value; it is a prompt to take a closer look. If there are deviations (mostly over time), you must always look for the reason and not simply accept the value.
In larger companies there is a tendency to derive assessments or variable salary components (bonuses) directly from KPIs. However, this often falls short. Many of the values in the overview below sound plausible at first, depending on your point of view, but on closer inspection, and taking human nature into account, they reveal some weaknesses.
Examples:
Lines of code per developer, per day – this actually came from a highly paid consulting firm and was fortunately rejected because it was obviously nonsense.
Cost allocation by usage – if I want to establish a service, I should not charge for usage but rather penalise non-use, i.e. bill everyone for the service costs. Those who don’t use the service will then have problems justifying this.
Build duration – the build duration is influenced by too many different factors, such as the number and thoroughness of tests, parallelisation within the build, availability of resources, etc.
Number of errors of a component in an iteration – not a good indicator because it depends too much on individuals and environmental conditions. May, however, be good for improving the process, e.g. commits / pushes only once all tests have been run locally.
Number of tests – the number of tests can increase easily without actually increasing the quality.
Test coverage – only suitable as a sole criterion under certain conditions. What is more important is that the value continuously improves. It is also important, however, to have a common definition of what is to be tested and how.
Ticket handling time – typically causes tickets to be closed mercilessly, without actually fixing the problem in question. A combination of measured values that take into account the steps within the workflow, including loops as well as other factors, is better.
Errors found in production – a better approach here would be to analyse why errors are not found until the system has gone live.
Disabled tests / number of tests per release – if abnormalities are found, this is a good time to have a look at the causes: Is the code currently being refactored, are new third-party libraries being used, which means some of the existing tests cannot be used without being adapted? A comparison with the previous release would be worthwhile here.
Architectural index / Maintainability index (e.g. from SonarQube) – a very good indicator of code quality, but not for other aspects of the application.
Number of known vulnerabilities per release / per application, broken down or weighted by severity – realistically, you should only measure the improvement and not the absolute value.
Infrastructure utilisation – depending on the available resources, it generally makes sense to measure utilisation. However, the interpretation depends on many details, e.g. does a static infrastructure with bare metal or VMs have to be evaluated differently in this respect than a Kubernetes cluster?
The following figures show examples using a combination of Prometheus and Grafana. Utilisation of the ELK stack (Elasticsearch, Logstash, Kibana) is common in this context.
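As an example of how such measured values can also be pulled out of Prometheus for further processing, the following sketch sends an instant query to the Prometheus HTTP API (/api/v1/query). The server URL and the metric name are placeholders.

```python
# Sketch: query a single metric from the Prometheus HTTP API.
# The Prometheus URL and the metric name are placeholders.
import requests

PROMETHEUS_URL = "https://prometheus.example.com"
QUERY = "sum(jenkins_builds_failed_total)"   # hypothetical metric name

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": QUERY},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()

# An instant-query result is a list of samples, each with labels and [timestamp, value].
for sample in data["data"]["result"]:
    print(sample["metric"], sample["value"])
```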
CI (Continuous Integration) refers to building and testing the software after each commit / push. The end result is usually a binary artefact that is stored in a repository for further use.
CD (Continuous Delivery / Deployment), as a superset of CI, tests the interaction of the generated artefacts with the goal of reaching production maturity. Continuous Delivery provides the corresponding binary artefacts for deployment; Continuous Deployment additionally puts them into production automatically – after successful testing.
SLAs (Service Level Agreements) have existed in one form or another for a very long time, especially in the field of hosting. They describe guaranteed characteristics of a service, which the customer and the contractor have agreed on before the service is performed. They have similarities with contract terms or guaranteed technical properties.
KPIs (Key Performance Indicators) are – generally speaking – data relating to the achievement of goals. They are intended to provide information on how good or bad the measured values are in comparison with given targets or average values of comparable companies.
DevOps (Development + Operations) is an approach from the agile environment that combines developers, testers and infrastructure operators in one team ("You build it, you run it, you fix it").
The graphic in figure 1 is well suited for an initial overview, but there are several things to question before KPIs can be derived from it:
Are the builds homogeneous, i.e. are all builds structurally the same, or is there a colourful mix of microservices, J2EE and C#?
How was the delta at earlier points in time? What are the expectations on the part of the developers?
The graph in figure 2 is also well suited for an initial overview, but the results are not meaningful without knowledge of the context:
Is the procedure test-driven? Depending on the requirements, expectations as to what proportion of builds should succeed can change.
What are the causes for failed builds? Infrastructure or program problems?
The builds per day as shown in figure 3 provide a good starting point for the service provider’s daily checks.
A sudden accumulation of failed builds should give rise to further investigation.
If the ratio between successful, unstable and failed builds remains more or less constant, the service is essentially running smoothly.
The executor usage per hour in figure 4 provides an important assessment, but the context must also be taken into account here:
Do I have a limited number of executors of a certain type? I should measure this separately.
Do I have a limit regarding the maximum number of executors e.g. due to the infrastructure? I should measure this separately, too.
Generally, CI will result in a typical split between scheduled and push/commit-controlled builds. Here you should keep the number of overlaps as low as possible. Daily builds usually accumulate before lunch and before the end of the working day, so scheduled builds should take place in the early hours of the morning or late in the evening.
Queued builds, as shown in figure 5, are a sign that there are not enough executors available.
In this case, it is a nightly build that builds many components. At night this shouldn’t bother anyone, but during the day it would block valuable resources.
Queue peaks can also occur when all masters want to access the agent pool at the same time.
Another reason may be that there are not enough of a certain type of agent available.
Taking this into account, what is the actual goal of CI/CD? If you keep this question in mind, the answer is usually: to produce software of good quality at high speed. Good quality includes, for example, maintainability, performance, exclusion of known critical security vulnerabilities, and adherence to governance, risk and compliance standards.
For every one of these terms, all participants – whether operators, users, service sponsors or others – have to agree on a common view in advance and, in case of doubt, adapt this view during the course of the project.
In order to be able to develop software under this premise, the interaction of several tools is required:
Ticketing (e.g. Jira)
Build (e.g. Jenkins)
Code analysis and test evaluation (e.g. SonarQube)
Vulnerability analysis (e.g. Nexus Lifecycle, Nessus)
Unit tests, integration tests, user acceptance tests, performance tests, regression tests
Application performance monitoring
As part of a central CI/CD service, all of these systems provide measured values that can be used for KPIs and SLA monitoring.
Which of these measured values are actually relevant depends on many specific details. Usually it makes sense to start with a handful of simple values and then refine them further once you have seen the first evaluations with real data. It is also important to determine what you want to measure.
There is no single set of KPIs that fits any setup. In fact, specific KPIs have to be determined based on the customer and the tools and technology used. It is best to start with a few simple KPIs and modify them as experience increases to fit the purpose in question.
From a business perspective, there are two central KPIs:
1. "Idea to Production", also known as "Time to Market" – the time between the formulation of an idea as a ticket and the go-live of the feature (a small calculation sketch follows below). Several factors are taken into account here:
How precisely the idea was recorded in the ticket (description, acceptance criteria)
"Size" of the ticket (small modification / addition vs. change of architecture)
Prioritisation / workload of developers
Speed of the CD pipeline
2. "Hotfix deployment", also known as "MTTR (Mean Time To Repair) / MTTF (Mean Time To Fix)" – the time between the (analysis of a problem and the) creation of a hotfix and the go-live. Several factors are also taken into account here:
Quality and scope of the previous analysis
Speed of the CD pipeline vs. completeness of the tests
Experience shows that it makes sense to start thinking about the hotfix deployment at an early stage (which tests can I do without? A special pipeline or special parameters for the "normal" pipeline?), so that you don’t panic and make mistakes in an emergency.
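Both of these lead times boil down to the difference between two recorded timestamps. The following minimal sketch shows the calculation for "idea to production"; the input records are invented examples and would in practice come from the ticket system and the deployment log. The hotfix variant works analogously.

```python
# Sketch: compute "idea to production" times from recorded timestamps.
# The input records are hypothetical examples.
from datetime import datetime
from statistics import mean

records = [
    # (ticket created,            feature live in production)
    (datetime(2023, 3, 1, 9, 0),  datetime(2023, 3, 14, 16, 30)),
    (datetime(2023, 3, 6, 11, 0), datetime(2023, 3, 21, 10, 0)),
]

durations_days = [(live - created).total_seconds() / 86400
                  for created, live in records]

print(f"idea to production: min {min(durations_days):.1f} d, "
      f"max {max(durations_days):.1f} d, avg {mean(durations_days):.1f} d")
```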
Other measured values that may be relevant for the operation of a central service:
Change success rate
The number of successful builds / deployments relative to the total number of builds / deployments. If the change success rate is too low, you should analyse where in the pipeline the error lies. If sources cannot be compiled, the developer is usually to blame. If the pipeline fails during integration testing, better mocks may be needed to catch such errors earlier. If the pipeline fails at quality gates, the associated data may not be available in the developer’s IDE, or they may not know what to do with the existing information.
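The calculation itself is simple; the following sketch uses invented build results as example data.

```python
# Sketch: change success rate = successful builds / total builds.
# The build results are hypothetical example data.
build_results = ["SUCCESS", "SUCCESS", "FAILURE", "SUCCESS", "UNSTABLE", "SUCCESS"]

successful = sum(1 for r in build_results if r == "SUCCESS")
change_success_rate = successful / len(build_results)

print(f"change success rate: {change_success_rate:.0%}")   # 4 of 6 -> 67%
```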
Deployments per month, per pipeline / application
Enables the comparability of different applications and technologies, provided that the framework conditions are reasonably similar.
"Lead time for change"
How long does it take for a commit to reach PROD (minimum, maximum, average)? This is related to KPIs 1 and 2 above.
"Batch size"
How many story points per deployment (minimum, maximum, average)? This depends on the individual case and on the Scrum velocity.
Code quality
Evaluation of test coverage, maintainability index, architectural index, etc., usually as a delta over time or against defined standards. Derived indices, such as the maintainability or architectural index, are less susceptible to manipulation than simple metrics like test coverage. In any case, the measurement procedure must first be agreed: if, for example, getters/setters are to be tested, what about generated code?
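If SonarQube is the source, such values can be fetched via its web API for trend tracking. In the following sketch the server URL, project key and token are placeholders; coverage and sqale_rating are standard metric keys.

```python
# Sketch: read coverage and maintainability rating for one project from the
# SonarQube web API. Server URL, project key and token are placeholders.
import requests

SONAR_URL = "https://sonarqube.example.com"
PROJECT_KEY = "my-service"

resp = requests.get(
    f"{SONAR_URL}/api/measures/component",
    params={"component": PROJECT_KEY, "metricKeys": "coverage,sqale_rating"},
    auth=("my-sonar-token", ""),    # token as user name, empty password
    timeout=10,
)
resp.raise_for_status()

for measure in resp.json()["component"]["measures"]:
    print(measure["metric"], measure["value"])
```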
Critical security bugs
Total number and/or number in new / changed code at a certain level. Corporate security may also be able to highlight certain individual errors here.
Performance deviations
Should always be treated with caution, but should definitely be observed. If there are unexpected deviations from previous measured values, the cause should always be determined.
Availability
What percentage of the service is available at the previously agreed times (24 x 7 vs. 9 x 5)? Are there any pre-defined maintenance periods on the infrastructure or service side?
Erroneous vs. successful calls
When does the service deliver errors? This can happen for example with session timeouts or deep links. With some applications, deep links don’t work well, and then you have to find other ways to provide the desired functionality.
Queue wait time (minimum, maximum, average)
How long does a job have to wait on average / maximum until it is dealt with? If waiting times occur, what is the cause? Are there generally too few agents, are there too few agents of a certain type, do all nightly builds start at the same time?
Builds / deployments per day / week
Actual, meaningful values depend on the application and the type of deployment. Here too, depending on the goal, the delta over time is the most interesting aspect; as a rule, the goal is to create more deployments per time unit.
Utilisation rate of the build agents
Setups with static machines are fundamentally different from dynamic infrastructures such as Docker / OpenShift / Kubernetes / AWS / Azure / etc. For static machines, I aim for a load that is as evenly distributed as possible. Working with a dynamically provided infrastructure is more about limiting or capping costs.
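For a Jenkins-based setup, a simple utilisation snapshot can be taken from the computer API; busyExecutors and totalExecutors are part of its standard JSON. The URL and credentials in the sketch are placeholders, and a real setup would record such snapshots periodically rather than once.

```python
# Sketch: snapshot of executor utilisation from the Jenkins computer API.
# URL and credentials are placeholders.
import requests

JENKINS_URL = "https://jenkins.example.com"

resp = requests.get(f"{JENKINS_URL}/computer/api/json",
                    auth=("user", "api-token"), timeout=10)
resp.raise_for_status()
data = resp.json()

busy, total = data["busyExecutors"], data["totalExecutors"]
ratio = busy / total if total else 0.0
print(f"executor utilisation: {busy}/{total} ({ratio:.0%})")
```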
Process-related KPIs are indicators of how well processes are actually put into practice:
WTFs per day/week
How often does the team experience WTF ("What the fuck") moments? How often do things come up that nobody expected?
Impediments per sprint
How many impediments come to light per sprint?
Impediment removal time
How long does it take to remove an impediment?
Non-availability of the product owner
A common problem when introducing agile working: project managers become product owners, but little changes apart from this. This automatically leads to the fact that they cannot do justice to the task of a product owner – neither from the point of view of the company nor from the point of view of the team(s).
SLAs should be agreed between the operator and the user before a service goes into operation, to ensure there are no misunderstandings later on due to completely different expectations.
KPIs are indicators that generally require a closer look when they change. They are only poorly suited to measuring quality 1:1; usually the difference against a previous point in time is the better approach. They should be re-evaluated and revised regularly.
There are different types of KPIs, depending on the point of view; and each of these points of view has its justification. There is often the danger of getting too involved with the purely technical measured variables. However, experience has shown that it is the application and process KPIs that provide the most insight, even though their determination involves more effort. The technical KPIs, on the other hand, are more of a help when it comes to the diagnosis and removal of weaknesses.
Atlassian Jira probably has the highest market share among the project management tools used by companies and organisations that develop software. However, use of the tool goes far beyond software development projects, as it is also used to manage projects that have nothing to do with software. The tool often represents the hub of daily work organisation and is therefore enormously important. Like most project management tools, Atlassian Jira is optimised for individual projects. However, tasks of different teams often depend on one another.
The software organises the management of tasks into so-called projects, no matter whether this relates to real projects or just a summary of the work of a team. Like most project management tools, usage has been optimised for planning and managing individual projects.
In the everyday life of software development, however, projects or teams do not usually work in isolation, and this is something our ASERVO consultants experience time and again when visiting customers. Often, tasks of different teams depend on one another, or teams even work together on a common product or result within the scope of a product portfolio. For example, each team could work on one software component that is used in a car alongside other software components and communicates with them. What’s more, there may be joint release dates that make multi-project or portfolio planning necessary. At the level of resource planning, too, when it comes to the number and competencies of available employees, multi-project planning is absolutely necessary in order to take total workloads into account within the overall planning.
The information below details the options that Atlassian Jira offers for multi-project management, both on its own and with add-ons.
Coordinating tasks between projects verbally, e.g. in regular cross-team meetings, is a natural and recommended part of everyday work. Beyond that, even without extensions, the tool itself offers some interesting possibilities for the multi-project management of tasks.
Multi-project or portfolio Jira boards
With Scrum boards (see Fig. 1 and Fig. 2) or Kanban boards from Jira Software, you can track and update the processing of tasks graphically. The set of tasks displayed on a board can be freely defined via a query, which can also span several projects, as shown in Fig. 1 and Fig. 2 (projects PJA and PJB). This allows you to easily obtain a multi-project or portfolio view of how tasks are being processed.
Documentation on agile Jira boards
Fig. 1: Multi-project Scrum backlog with cross-team epic "enhanced entertainment" and stories from different projects (projects PJA and PJB)
Fig. 2: Multi-project Scrum board with stories from different projects (projects PJA and PJB)
It is also possible to define cross-project evaluations – so-called filters – in order to obtain a list of tasks.
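Such a cross-project filter is ultimately just a JQL query, which can also be executed programmatically. The following sketch runs one against the Jira REST API; the server URL and credentials are placeholders, and the project keys follow the example above.

```python
# Sketch: run a cross-project JQL filter via the Jira REST API.
# Server URL and credentials are placeholders; project keys follow the example.
import requests

JIRA_URL = "https://jira.example.com"
JQL = "project in (POM, PJA, PJB) AND statusCategory != Done ORDER BY priority DESC"

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": "summary,status", "maxResults": 50},
    auth=("user", "api-token"),
    timeout=10,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["status"]["name"], issue["fields"]["summary"])
```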
Besides the direct use of the list in the tool, as shown in Fig. 3, it can also be exported to a CSV file. Furthermore, the list can be used to define the tasks that are to be used in a report or other graphical evaluation.
Fig. 3: Filter with results list across 3 projects (projects POM, PJA, PJB)
Without additional add-ons, it is therefore already possible to track the status of processing across multiple projects. However, it is practically impossible to plan tasks or resources in advance across projects, and there are further restrictions: cross-project release versions, for example, are not possible. This requires additional tools, either outside of Jira or in the form of add-ons.
Without a direct integration, data exchange with external tools usually takes place via the export and import of CSV files. The desired set of tasks and fields can be defined using filters. But be careful! If you want to re-import data in order to update existing tasks, this is only possible via the CSV import function in the administrator menu. It is also worthwhile saving the CSV import settings (especially the field mapping) in a file so they can be reused.
The simplest case of using an external tool would be further processing with MS Excel. However, the exchange of data does not take place immediately with every change, but only through an export. This means there is no guarantee that the tasks in the external tool are up-to-date at all times.
In our experience, this procedure is hardly practical: data exchanges will take place less and less often due to the effort involved, which means the data will not be up to date or reliable. Exceptions are simple evaluations via an export to MS Excel followed by further processing there. Note that estimates exported to MS Excel, which Jira displays in days or hours, are then shown in seconds.
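A small post-processing step can convert these values before they are used further. The following sketch assumes a CSV export with an "Original Estimate" column holding seconds; the exact column name depends on the export configuration.

```python
# Sketch: convert Jira time estimates exported in seconds into hours before
# further processing in a spreadsheet. The column name is an assumption and
# may differ depending on the export configuration.
import csv

COLUMN = "Original Estimate"   # exported by Jira in seconds

with open("jira_export.csv", newline="", encoding="utf-8") as src, \
     open("jira_export_hours.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get(COLUMN):
            row[COLUMN] = f"{int(row[COLUMN]) / 3600:.2f}"   # seconds -> hours
        writer.writerow(row)
```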
Portfolio for Jira description
For multi-project or portfolio planning and progress tracking, Atlassian offers a Jira add-on called "Portfolio for Jira". Portfolio for Jira has been designed especially for agile approaches, and the current Jira data is always used automatically. Changes can easily be tried out in Portfolio itself until they are explicitly written back to Jira.
The add-on offers its own cross-project and cross-team view, including graphical representation. This overview is available at different hierarchical levels, as shown in Fig. 4 for stories and in Fig. 5 for epics. The graphical planning includes project releases such as "PJA V1" and cross-project releases such as "Common Release 1" (cross-project releases are only possible with Portfolio for Jira!). In the example, Portfolio for Jira automatically assigned all stories to "Common Release 1", because the calculation showed that they could all be completed by the fixed release date.
Fig. 4: Portfolio for Jira story plan for three projects (projects POM, PJA, PJB)
POM issues are not shown in this story view because POM only contains epics. However, there is also a separate epic view:
Fig. 5: Portfolio for Jira epic plan for three projects (projects POM, PJA, PJB)
The features of Portfolio for Jira are above all useful at the cross-project level for multi-project or portfolio management, but they can also be used by project managers and requirements managers when working within individual projects, as shown below.
Cross-project features
Fig. 6: Composition of a team according to skills and capacities
Project-specific features
In practice, we see the strengths of the add-on in the considerable extension of the multi-project and portfolio planning possibilities (including resource planning) and in the direct integration and optimal data synchronisation with Jira. From our point of view, the graphical representation of the planning and the handling of cross-project releases also deserve extra plus points.
On the other hand, we see the limited options for adjustments and custom design as a drawback. For example, it is not possible to add your own calculations, and the graphics can only be influenced to a limited extent. Although there is an official API for custom extensions, Portfolio for Jira cannot (yet) keep up with the market-leading portfolio planning tools.
Manufacturer: Teamlead
Adds some additional features to Portfolio for Jira
Manufacturer: ALM Works
Own hierarchical structuring of Jira issues, multi-project view
Manufacturer: SoftwarePlant
Gantt chart planning with dependencies and some portfolio functions. Similar to a traditional project management tool, but also takes agile procedures into consideration.
Manufacturer: SoftwarePlant
Project portfolio management, resource management, risk management with Gantt charts.
Agilefant Business Execution for Jira
Manufacturer: Agilefant Ltd
Links high level initiatives with Jira epics and tasks. Allows you to plan and track progress from a business perspective.
In its stand-alone form, Atlassian Jira already offers initial possibilities for multi-project or portfolio management, but on its own it is by no means sufficient. The required extensions are provided via add-ons, in particular Portfolio for Jira, which works seamlessly with Jira issues, teams and releases. In most cases, however, such add-ons do not offer all the functions provided by independent multi-project or portfolio management tools. In individual cases, the decisive factor is whether integration or the range of functions is more important.
When it comes to using open source components to manufacture modern software, the bottom line is this – precise intelligence is critical. Tools that lack precision cannot scale to the needs of modern software development. Inaccurate and/or incomplete data will leave organizations to deal with vulnerabilities, licensing, and other quality issues that lead directly to higher costs and reduced innovation.
Learn about Advanced Binary Fingerprinting and why precision matters in data intelligence in Sonatype’s white paper: "Enforce Open Source policies with confidence".
Find out more about the applications you use. With a free Application Health Check, Sonatype offers the option of having your stock of open source and proprietary components recorded in a parts list. Why don’t you scan one of your applications – the results might surprise you.
6 reasons for the Application Health Check: