Commercial off-the-shelf (COTS) software is a staple of modern software development. Not only does it extend a solution’s features, it also gets them to users faster. However, it introduces risks that can adversely impact the proper operation of your system or product and the information it processes, stores, or transmits: risks like theft, counterfeiting, tampering, malware insertion, poor development practices, and defective components.

These risks need to be mitigated to minimize the potential for damage. It’s essential that any 3rd-party software supply chain plan accounts for the system’s risk tolerance. And a critical piece of that plan is the ability to assess COTS and other 3rd-party components during the acquisition and provisioning process.

Use the acquisition process during the early stages of the development lifecycle to protect the entire system. Some simple tactics include obscuring component use (to avoid unnecessary information disclosure), requiring tamper-evident packaging or attestation, and using a trusted distribution channel.
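As a small illustration of the “trusted distribution channel” tactic, here is a minimal acquisition-time check in Python; the URL, file name, and published hash are placeholders I’m assuming for the example. It downloads a component only from a pinned vendor endpoint and rejects it if the SHA-256 digest doesn’t match the value the vendor published out of band:

```python
import hashlib
import urllib.request

# Illustrative sketch only: the URL, file name, and expected hash are
# placeholders, not a real vendor endpoint. The published digest should come
# from an out-of-band, trusted source (e.g., the vendor's signed release notes).
TRUSTED_URL = "https://downloads.vendor.example/component-2.4.1.tar.gz"
PUBLISHED_SHA256 = "replace-with-the-vendor-published-sha256"


def fetch_and_verify(url: str, expected_sha256: str, dest: str) -> None:
    """Download a component from the pinned channel and verify its SHA-256."""
    urllib.request.urlretrieve(url, dest)

    digest = hashlib.sha256()
    with open(dest, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)

    if digest.hexdigest() != expected_sha256.lower():
        raise RuntimeError(
            f"Integrity check failed for {dest}: got {digest.hexdigest()}, "
            f"expected {expected_sha256}"
        )
    print(f"{dest}: digest matches the published value")


if __name__ == "__main__":
    fetch_and_verify(TRUSTED_URL, PUBLISHED_SHA256, "component-2.4.1.tar.gz")
```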

I particularly like requiring tamper-evident packaging or attestation. Software suppliers can help mitigate risk by implementing security and privacy controls, providing transparency into their processes, and offering additional visibility into, and vetting of, their downstream suppliers. Software composition analysis (SCA) can give you an effective bill of materials (SBOM) for your 3rd-party software; however, some vendors don’t allow it, and other software comes packaged as a compiled binary. This is when you need to lean on your supplier(s) to deliver the visibility and security assessment data you require as part of your risk management plan.
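If the supplier does deliver an SBOM alongside (or instead of) raw SCA output, even a quick pass over it is worthwhile. Here is a minimal sketch, assuming a CycloneDX-style JSON SBOM and a placeholder file name, that lists components and flags any that arrive without a version:

```python
import json

# Minimal sketch, assuming the supplier delivers a CycloneDX-style JSON SBOM;
# "supplier-sbom.json" is a placeholder file name.
with open("supplier-sbom.json") as fh:
    sbom = json.load(fh)

components = sbom.get("components", [])
for component in components:
    name = component.get("name", "<unnamed>")
    version = component.get("version", "<no version>")
    purl = component.get("purl", "")
    print(f"{name} {version} {purl}".rstrip())

# Components without a version are hard to match against vulnerability feeds,
# so flag them for follow-up with the supplier.
unversioned = [c.get("name", "<unnamed>") for c in components if not c.get("version")]
if unversioned:
    print("Follow up with the supplier on:", ", ".join(unversioned))
```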

An internal or external expert can conduct assessments before you accept, modify, or update system components. These assessments will identify vulnerabilities, signs of tampering, and evidence of anything “out of bounds” with your supply chain controls and compliance requirements. Most of the following assessment techniques will yield valuable insight. Choose one or more, but do so wisely so you don’t spend all your time chasing assessment data:

  • Validation of hashes and signatures
  • Architecture/Design review
  • Static analysis (SAST)
  • Dynamic analysis (DAST)
  • Fuzz testing (see the sketch after this list)
  • Stress/penetration testing
  • Review of 3rd-party SDLC process
  • Review of 3rd-party software documentation
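
Fuzz testing in particular is easy to start small with. The toy harness below throws random byte strings at a parsing routine and counts unhandled exceptions; parse_record is a stand-in I’m assuming for whatever entry point the 3rd-party component actually exposes:

```python
import random

# Toy harness only: "parse_record" stands in for whatever entry point the
# 3rd-party component actually exposes; the real call would come from its docs.
def parse_record(data: bytes) -> dict:
    # Placeholder parser so the sketch runs end to end.
    text = data.decode("utf-8")          # may raise UnicodeDecodeError
    key, _, value = text.partition("=")
    return {key: value}


def fuzz(iterations: int = 10_000, max_len: int = 64) -> None:
    random.seed(1234)  # a fixed seed makes crashing inputs reproducible
    crashes = 0
    for i in range(iterations):
        blob = bytes(random.getrandbits(8) for _ in range(random.randint(0, max_len)))
        try:
            parse_record(blob)
        except Exception as exc:  # any unhandled exception is a finding
            crashes += 1
            if crashes <= 5:      # show only the first few failing inputs
                print(f"iteration {i}: {type(exc).__name__} on {blob!r}")
    print(f"{crashes} crashing inputs out of {iterations}")


if __name__ == "__main__":
    fuzz()
```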

Understanding and minimizing your attack surface is the ultimate objective. This critical step of threat modeling allows you to “shift left” in your software or system development lifecycle and choose safer components for your solution. My personal preference is the review of the 3rd-party SDLC process. Because software changes so rapidly these days, results from vulnerability scans go stale quickly. However, if the process by which the provider designs and builds software is sound (and well documented), that usually reflects a commitment to security, and chances are the software is fairly secure.

And if it isn’t, a well-documented, secure SDLC helps with the nasty business of data breach notification, compliance adherence, and the like, should the need ever arise.