Operational Security in the Secure Software Development Lifecycle
By Doug Miller
Abstract
History has shown that the majority of money spent on application development comes after the application has been rolled out, because of errors in code, upgrades, and, more recently in programming history, security vulnerabilities. Computer hacking has evolved from a few users learning how to find ways in and out of systems into networks of organized crime dedicated to exploiting application and system vulnerabilities for their own gain, and this has made the case for development projects to learn how to protect themselves from these exploits. The cost of an application or system failing in today’s world cannot be expressed only in dollar figures, but also in company reputation and, in some extreme cases, human lives. Developing more secure code is an absolute necessity, yet many organizations forgo this critical step in development. Considering security during the full lifecycle of development protects not only the company itself but its clients as well. Companies need not only to incorporate security into their development lifecycle, but to validate these security measures with testing and validation.
Many developers and companies do not know how to fully implement a security approach in their development, testing, and operational models. In this document we will go over approaches and methodologies that developers can merge into their existing structure, or ways to tweak their methodology in order to do so. Comparisons will be made between development with no security checks and a full lifecycle of secure development to show the differences and their impact, and financial and human capital comparisons will be drawn from them to show the impact in both arenas.
The comparison between a product with security incorporated into its full lifecycle and one without security will be shown. While many developers do not feel there is a need for more work when the product needs to be rolled out, the drawbacks of not doing so will be clearly demonstrated with real-world examples. The conclusions drawn from this paper lay out the need for complete absorption of security into the full life cycle of the product to save money, protect your clients, and protect your business.
1. Introduction
The Software Development Lifecycle (SDLC) is an ever-evolving process with many methods, styles, and flows. Each method or style has certain advantages and disadvantages, and each development team or company decides which style to use based on many different factors: timeline, staff knowledge, budget, other restrictions, and so on. Whatever iterations, steps, or processes a new lifecycle introduces, they can benefit the consumer or the development company in many different ways; but without a proper security foundation and complete integration across the full lifecycle, both consumers and development organizations will fall prey to the pitfalls of insecure programming.
Secure application development was first introduced in 1991 by the US National Research Council’s Computer Science and Telecommunications Board. They published Computers at Risk, which asked the question “What makes secure software different?” By asking this question they essentially started the thought process of how to develop applications that could be trusted and considered more secure than others. In their response to the question they began the framework for secure application development and the methodologies that are still in use today.
In today’s world, there is a computer application for almost everything. There are computer chips in all modes of transportation, in our cell phones, and even in our toilets. The next generation of appliances will connect to the internet for information, and eventually almost all forms of electronics will have some sort of connection to the internet or to a home location for updates and patches, or just to tell you the weather. All of these tools, appliances, and computers, soon to be essential pieces of our lives, will be connected via some sort of network. With everything now connected to the world, the threat of crippling attacks and simple exploits that could affect our quality of life grows day by day. Security in every code-based product will be essential in order to protect not only our day-to-day quality of life, but eventually our lives.
The threat from lone cyber attackers and state-sponsored groups grows day by day. We have seen an entire country’s internet access disrupted when Georgia’s network infrastructure was hit by denial of service (DoS) attacks in 2008. One form of denial of service attack simply overloads a choke point in a network with packets; processing these packets slows down and eventually stops the critical network appliances from functioning correctly. We have also seen the power grid in Brazil reportedly shut down by cyber attackers, affecting entire cities at once. These kinds of attacks are becoming more powerful and more organized, and most of them can be prevented.
Prevention of cyber attacks, or even simple identity theft, can be accomplished in many different ways. The main source of prevention is obviously good, secure code, and in order to establish secure code you must fully incorporate security into your Software Development Life Cycle (SDLC). This is the first phase in our tiered approach. The next phase is a formal certification process by the development company for the appliances or products they state compatibility with: a seal of approval that says not only that our product will run with this OS, these applications, and on this hardware, but that we have tested the security of these combinations as well. The third tier goes beyond the normal spectrum of typical software security companies in that it calls not only for application testing for security vulnerabilities, but for formal penetration testing by a third party that tries to fully exploit the product or application. This will not be necessary for every application, but any application that will house sensitive information or have access into a network that needs to be protected should have some sort of formal penetration testing certified by a third party. The final phase is end-user security awareness training. This can be as in-depth or as simple as the nature of the product demands, but there should be some literature or training on how not only to use the product, but to use it securely.
2. Related Works
Initially, security was not a focus in many products until after the product was rolled out and a security vulnerability was discovered. In 2005, Microsoft published their methodology for creating software that can withstand a malicious attack, which they called the Trustworthy Computing Security Development Lifecycle, or SDL. Their conclusion was that the cost of implementing more security controls up front in the development cycle is far less than the cost of releasing patches and fixing errors down the road.
3. Tiered Security Solution
In a 2005 study by the President’s Information Technology Advisory Committee (PITAC) called Cyber Security: A Crisis of Prioritization, the major source of vulnerabilities in US networks and computing systems was identified as our software: not weak firewalls or intrusion prevention systems, but the software that runs on a web server or even a desktop. They compared infected software to a cancer that can slowly kill and spread silently, even with experts looking for it. And just like cancer, there are preventative measures that can be taken in order to mitigate this risk.
Security solutions are never a quick easy fix. It takes coordination, proper planning, and complete support from all levels of management. The first and one of the most important tiers in a complete security solution will be secure development.
3.1 Secure Development
Secure development, in the broad sense, simply means incorporating a security checkpoint in each phase of development. Without getting into each and every development style, we will focus mainly on the general steps found in almost all forms of a development lifecycle: Requirements, Design, Implementation, Verification, Release, and Support and Servicing. Figure 3.1.1 shows a generic life cycle that most iterations of development will have.
Figure 3.1.1 Generic Microsoft development process (Microsoft Corp, 2005)
Microsoft defines the following as their overview for their Secure Development Lifecycle:
• Secure by Design: the software should be architected, designed, and implemented so as to protect itself and the information it processes, and to resist attacks.
• Secure by Default: in the real world, software will not achieve perfect security, so designers should assume that security flaws would be present. To minimize the harm that occurs when attackers target these remaining flaws, software's default state should promote security. For example, software should run with the least necessary privilege, and services and features that are not widely needed should be disabled by default or accessible only to a small population of users.
• Secure in Deployment: Tools and guidance should accompany software to help end users and/or administrators use it securely. Additionally, updates should be easy to deploy.
• Communications: software developers should be prepared for the discovery of product vulnerabilities and should communicate openly and responsibly with end users and/or administrators to help them take protective action (such as patching or deploying workarounds).
Their model can be shown in figure 3.1.2 which shows the security layers added to each basic development step.
Figure 3.1.2 Security Checks throughout generic software lifecycle (Microsoft Corp, 2005)
Development teams should get buy-in from their managers at the beginning of a project, and vice versa, to make security a high priority in the development of a new or existing application. From the very beginning, security should be involved during requirements gathering, with a security liaison assigned to assist the developers, as most developers are not fully trained in security design and implementation. During requirements gathering, teams should also consider how the security controls will integrate with the functionality and business requirements of the product.
The design phase adds a few extra steps, including defining a security architecture and design guidelines alongside the basic architecture and design. The design will include how to build on a trusted computing base and layer the architecture on top of it to include layers of privilege; if the product is correctly designed, this effectively lowers the attack surface. Defining the attack surface is also a priority in this phase, which includes identifying which features interact with users and keeping all security controls to the principle of least privilege. The final piece added in the design step is a threat modeling process: going through each component and identifying any threats to it in a structured methodology.
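As a minimal sketch of what such a structured pass might record, the snippet below walks components and threats using the STRIDE categories; the component names and threat entries are illustrative assumptions, not examples from the paper.

```python
# Minimal sketch of a structured threat-modeling record using STRIDE.
# Component names and threat descriptions are illustrative assumptions.

STRIDE = ("Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege")

def add_threat(model, component, category, description):
    """Record one threat against one component, validating the category."""
    if category not in STRIDE:
        raise ValueError(f"unknown STRIDE category: {category}")
    model.setdefault(component, []).append((category, description))

model = {}
add_threat(model, "login form", "Spoofing",
           "attacker replays a captured session token")
add_threat(model, "report export", "Information disclosure",
           "temp file written world-readable")

# Walk each component and its identified threats in a structured way.
for component, threats in model.items():
    for category, description in threats:
        print(f"{component}: [{category}] {description}")
```

The value of the structure is that every component is visited against every threat category, so gaps are visible rather than silently skipped.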
Implementation is when the developers and security team can look for any security holes that may have made it into the code. The types of holes to look for in this step are simple structural errors and input-handling errors. Tools like fuzzers enter random and malformed data into input fields to test input validation and catch errors like buffer overflows and SQL injection. Code can also be run through automated tools that test for errors beyond what a normal compiler catches, and it should be reviewed by trained developers looking for the same errors.
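A fuzzer in its simplest form can be sketched as below: throw random, often malformed, data at a validation routine and treat any uncaught exception as a bug. The validator `validate_username` is a toy example assumed here for illustration; real fuzzing tools are far more sophisticated.

```python
import random
import string

def validate_username(s):
    """Toy input validator: 1-32 chars, alphanumeric plus underscore."""
    return 1 <= len(s) <= 32 and all(c.isalnum() or c == "_" for c in s)

def random_input(max_len=64):
    """Generate a random string including control and punctuation bytes."""
    alphabet = string.printable + "\x00\x7f"
    return "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, max_len)))

def fuzz(check, trials=1000):
    """Throw random data at a validator; any uncaught exception is a bug."""
    failures = []
    for _ in range(trials):
        data = random_input()
        try:
            check(data)          # we only care that it never crashes
        except Exception as exc:
            failures.append((data, exc))
    return failures

print(len(fuzz(validate_username)))  # 0 if the validator never raises
```

The same loop, pointed at a routine that parses lengths or builds SQL strings, is what surfaces buffer overflow and injection-style input errors before an attacker does.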
The verification phase includes a final security push once the code reaches user beta testing. This push includes reviewing all code that interacts with the attack surfaces found in the design phase. All legacy code that has been carried over is also reviewed, as many errors are still found in legacy code ported into a new project, along with any changes made to that legacy code.
After the verification phase and beta testing are complete, there is an overall security review of the product. The deliverable of this review is a final security stance that explains the security from an end-user standpoint and the application’s ability to withstand an attack after it goes live. It includes interviews with the product team and the security team member assigned in the requirements phase. The Final Security Review (“FSR”) will not identify all the vulnerabilities in the software; its main goal is an overall security picture for management to sign off on.
The final phase in the SDL is support and servicing after the product is released. From here the development team must watch for new vulnerabilities that could affect their product and produce fixes quickly enough to push out to clients before they are compromised. They also need to learn from these errors and try to prevent similar errors from arising.
Microsoft implemented these changes for the rollout of Windows Server 2003. Comparing the number of security releases or patches against Windows 2000, which did not use the SDL, the results were 62 patches for Windows 2000 (pre-SDL) versus 24 patches for Windows Server 2003 (SDL).
Figure 3.1.3 Security Patches released in the first year of a SDL Operating System vs. a non SDL Operating System (Microsoft Corp, 2005)
3.2 Penetration Testing
The Security Development Lifecycle is a great step toward developing more stable and trustworthy code, but there are many more ways to ensure that code has gone through rigorous testing beyond that of the development team. Penetration testing is used to find ways into and out of a program that developers and internal staff would miss during all the security evaluations. Even trained security developers do not have the full spectrum of skills and abilities that a full-time application penetration tester has.
Penetration testing is the task of trying to break into an application by exploiting anything and everything possible to gain unauthorized access or to gather information. There are two main forms of penetration testing: “black box,” in which the attackers have very little or no information on the product or application, and “white box,” in which the attackers have substantial information about the product or network to find their way around. In this case, we would probably look more toward white box testing, or even “grey box” testing, in which the attackers have just enough information to locate the application in question.
The benefit of penetration testing is exposure to the attacks you could expect in real life. Depending on the expertise of your penetration team, they will use the most up-to-date attacks they have seen or learned about in the wild, including vulnerabilities or exploits that have not yet been added to any vulnerability-scanning software you may be running against your code. Until the technology improves, human penetration testing cannot be replaced; automated tools lack the intuition a human being brings to trying to compromise an application.
The one decision here is whether to keep penetration testers in house or have a third party do the testing. Both have benefits, but if possible an outside resource will have the least biased approach and will start off knowing just as much as a typical attacker. For smaller organizations it might also be beneficial to use third-party resources, as keeping full-time staff when you only roll out a few application releases a year might not hold the best cost-to-benefit ratio.
3.3 Development Third Party Auditing & Review
Developers can work on thousands of lines of code for countless hours; sometimes they can look at an error every day and not even realize it. When developers review their own code, or even do a peer review, many mistakes still go unnoticed. In a peer review, the code is not the only factor: certain developers may respect or look up to others and skim over their code, either because they trust it or because they do not want to speak up against a colleague. The only way to ensure a true audit of code is to bring in an outsider, or even a security resource from another project in the organization.
The benefits of an outsider’s review of the code include a fresh set of eyes on the project, expertise in security solutions for code, and prevention of developers partnering up to create a security hole for their own benefit down the road (also known as collusion). It is essential that the audit not be viewed as a witch hunt or a search for weak developers. Collusion becomes harder with larger teams, peer reviews, and security controls in place, but if enough people buy into a plan it is still certainly possible.
A key rule in secure development is to take responsibility for one’s code, but this does not mean that a mistake must cost you a pay cut. It simply means taking pride in your code: if you make a mistake, you learn how to fix it and avoid repeating it. Once you learn how to fix it, you should cross-train your fellow developers to look for this mistake or teach them how to avoid it.
The third-party code review is one way to bring in an outsider to ensure secure code has been developed, but another form of third-party review is an audit of the security controls, change management controls, and project management controls that should already be established for large projects. If these controls are not adhered to, there could have been requirement or design changes that were never agreed upon, or that management knows nothing about.
3.4 Security Certification
All software is now developed to be interoperable with different operating systems, applications, and tools. Each combination of applications and operating systems opens up an entirely new attack surface and therefore needs to be fully tested to simulate the setup the typical end user is expected to have. Many applications list the operating systems they are compatible with at release, but do not list which configurations were tested during a security assessment.
Development companies need to certify certain application settings and environments so that users can have confidence in the application they are using, in the environment the product works best with, based on the recommendation of the development company. A basic example would look like figure 3.4.1.
Figure 3.4.1: Basic Certified and Compatibility chart
Figure 3.4.1 shows a minimal chart with very few variables; application developers and companies need to give security recommendations alongside certified and tested configurations. A simple compatibility chart will not be enough going forward, as it does not even recommend a setup that will run more stably or with fewer issues. This kind of information is very valuable to clients as well as to the vendor. Some clients simply want to run a product on a configuration that is compatible with the software, but many environments now require configurations to be as secure as possible. The work for this will not fall into the lap of the developers but of a testing team, combining Quality Assurance and penetration testing to ensure not only usability but a secure rollout as well. By testing more applications and environments, more bugs and security holes will be found prior to release; if the changes cannot be implemented before release, they can already be ready for the patches down the road.
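The distinction between a certified and a merely compatible configuration can be sketched as a simple lookup; the product and platform names below are hypothetical, invented for illustration only.

```python
# Hypothetical certification matrix. Product and platform names are
# illustrative assumptions, not any real vendor's chart.

CERTIFIED = {("AppServer 2.0", "Windows Server 2003"),
             ("AppServer 2.0", "Red Hat EL 4")}
COMPATIBLE = {("AppServer 2.0", "Windows 2000")}

def support_level(product, platform):
    """Certified = security-tested configuration; Compatible = runs, but
    the vendor has not security-tested it; Untested = no claim at all."""
    key = (product, platform)
    if key in CERTIFIED:
        return "Certified"
    if key in COMPATIBLE:
        return "Compatible"
    return "Untested"

print(support_level("AppServer 2.0", "Windows Server 2003"))  # Certified
print(support_level("AppServer 2.0", "Windows 2000"))         # Compatible
```

Publishing the matrix this way also supports the risk-transfer argument made later: a customer running an "Untested" combination knowingly steps outside the vendor's security claims.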
3.5 Customer Certification and Accreditation
Much of the security work with applications is done on the development side, and the development company can be held responsible for any security holes in its code or applications. The rest falls solely on the customer, who must ensure they have done everything on their side to meet the security requirements they have developed for their organization, or external requirements such as the PCI Security Standards Council (for handling payment transactions securely), Sarbanes-Oxley (SOX) compliance (for auditing and accountability), the Health Insurance Portability and Accountability Act (HIPAA), and many others.
Government compliance standards are a great start for a company without an established risk and strategy process, but they are only a start. Every organization should have an established baseline of its security risks, and from there include any new risks brought in by any new software or configuration, whether a public-facing or internal application. The same goes for any appliance or software that has been tested thoroughly by the software developers: every environment is different and can introduce new issues, and only the network architects, security team, and developers in each organization know what is best for their environment.
New applications in every environment should have their configurations tested in a development lab before going into production, with members from all portions of the IT department: hardware, security, network, strategy, architecture, change control, compliance, and project management teams. Every team should investigate any new risks or issues an application can raise, including its interoperability with other existing applications and processes. A risk register is then created from the input of all the teams listed above.
Once the risk register is created, there are three ways of handling each risk: it can be fixed with changes in configuration or code, transferred to a third party via insurance or service level agreements (SLAs), or accepted and signed off by management. If a risk is accepted, management needs to fully understand the risk at hand, the likelihood of it being exploited, and the damage that could be done to the organization, not only in dollars but in reputation and public perception as well.
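A minimal sketch of such a register, with the three handling options above and the sign-off rule enforced, might look as follows; the field names, the 1-5 scoring scheme, and the sample entries are assumptions made for illustration.

```python
from dataclasses import dataclass

# Sketch of a risk register entry. The three handling options mirror the
# text: fix (mitigate), transfer, or accept with management sign-off.
# Field names and the likelihood/impact scoring are assumptions.

HANDLING = ("fix", "transfer", "accept")

@dataclass
class Risk:
    description: str
    likelihood: int           # 1 (rare) .. 5 (almost certain)
    impact: int               # 1 (minor) .. 5 (severe)
    handling: str
    signed_off_by: str = ""   # required when handling == "accept"

    def __post_init__(self):
        if self.handling not in HANDLING:
            raise ValueError(f"handling must be one of {HANDLING}")
        if self.handling == "accept" and not self.signed_off_by:
            raise ValueError("accepted risks need management sign-off")

    @property
    def score(self):
        return self.likelihood * self.impact

register = [
    Risk("unpatched web framework", 4, 5, "fix"),
    Risk("vendor outage", 2, 4, "transfer"),      # covered by an SLA
    Risk("legacy report tool, internal only", 2, 2, "accept",
         signed_off_by="CIO"),
]
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.score, risk.description, "->", risk.handling)
```

Making sign-off a hard requirement in the data model mirrors the paper's point that accepted risks must carry an informed management signature, not a silent default.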
3.6 Customer Third Party Audit
Controls in an environment can feel like red tape and a hassle for teams simply trying to get things done. However, this red tape can be a lifesaver if something goes wrong, a project artifact needs to be found, or auditable data must be produced. To ensure that all the controls are being properly handled and documented, a third-party audit should be conducted as often as needed; an annual audit is strongly recommended.
Third-party audits of internal processes, and of the interaction between project management teams, change control, and the technical side of the house, can unearth many shortcomings: problems with the methodology used, issues between teams, and any collusion or disregard for controls by employees. While most audits exist not to improve processes but to show that processes are correctly followed, the organization under audit will often find issues with its current methodology that would never have been discovered without an outsider. This not only protects the company’s assets but ensures that those assets are being used in the best way possible.
3.7 Patch Management
No code is perfect. No development team finds all errors, and most customers will not find any either. This is why patch management is essential to the security and health of an IT infrastructure. While it is the developers’ responsibility to create patches and make them readily available, there is no point if the customers do not have an enterprise patch management solution in place.
Having an enterprise patch management solution is not as simple as checking the “automatic updates” button and calling it a day. Each patch can not only affect existing processes and applications but, in some cases, actually introduce more vulnerabilities. Many organizations treat patch management as an afterthought to the already arduous task of keeping IT solutions up and running, yet the easiest way into an organization is through old vulnerabilities that companies have not patched. An enterprise solution can be as simple as a plan for how and when to update servers and applications.
Patches cannot always simply be pushed out to the enterprise or to a few servers as soon as they come out, either. They need to be tested in a lab if possible. In the past, patches could take down entire systems; most software organizations now roll out smooth and efficient patches that do not affect production systems negatively. For some organizations or applications you might accept on trust that the code will not hinder your production systems, but if it does, a back-out plan will also have to be included. Minimizing the time between patch release and patch installation is the end goal. This will all be included in a patch management plan, which will include sign-off from management on the risks associated with certain systems and the gap between patch release and patch installation.
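Tracking that release-to-installation gap can be sketched very simply; the patch identifiers and the 30-day window below are assumptions chosen for illustration, standing in for whatever window management signs off on.

```python
from datetime import date

# Sketch of tracking the gap between patch release and installation,
# one of the metrics a patch management plan would have management
# sign off on. Patch names and the 30-day window are assumptions.

MAX_GAP_DAYS = 30   # agreed window between vendor release and install

def overdue(patches, today):
    """Return names of patches still uninstalled past the agreed window."""
    late = []
    for name, released, installed in patches:
        if installed is None and (today - released).days > MAX_GAP_DAYS:
            late.append(name)
    return late

patches = [
    ("KB-1001", date(2010, 1, 5),  date(2010, 1, 20)),  # installed in 15 days
    ("KB-1002", date(2010, 1, 10), None),               # never installed
]
print(overdue(patches, today=date(2010, 3, 1)))  # ['KB-1002']
```

A report like this gives management a concrete, auditable number for the patch gap they are accepting, rather than a vague assurance that updates are "handled."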
3.8 Training and Awareness
The average computer user does not have formal security training. The easiest way to exploit a system or user is social engineering: getting passwords, or having the user operate the system in a way that allows an outside or inside attacker to exploit a particular threat. To combat these issues, users will need some sort of awareness training on how to use the system securely and properly. For some systems this could be a few simple instructions on how to log in and log out securely; for critical systems that could pose a threat to the user, the company, or even the nation, the training should be more robust.
Developers typically do not have much security knowledge without special training; most computer science programs had no security focus or classes until recently. Companies therefore need to ensure their developers have a basic understanding of how to develop applications securely, to protect their clients and the company’s reputation. The types of attacks and exploits grow every day, so training and awareness must be refreshed at least annually to update developers and development teams on new and emerging threats. This refresher can be outsourced or put together by internal staff with the help of subject matter experts; on smaller budgets, one developer can take the training and cross-train fellow developers in the office.
4. Application
A robust and complete tiered security solution is not for everyone. Plain and simple, not all organizations can afford it, or even have the personnel to do more than write code and sell it. For the most part these solutions should be implemented by organizations that want to protect applications developed for government agencies or that hold or interact with sensitive information. One way to market this is to state that your software has gone through rigorous testing, third-party audits, and security controls to give your customers the most secure code possible. This also builds a reputation for your software as more secure than competitors’, which in turn gives you a market edge, as long as customers and your organization can afford it.
The small development shop cannot afford to have third-party auditors come in and read every line of code; most large shops cannot do this either, especially with critical deadlines imposed by contract restrictions or other variables. The point is not to kill yourself or your developers including all these security controls. The main focus is simply to find ways to include these controls and solutions in your organization at every stage. There are many online resources for training developers on basic web vulnerabilities in code and input handling, and there are many open source and less costly solutions at each step.
For all organizations, a Secure Development Lifecycle should be included. This can be done very cost-effectively, almost entirely with existing personnel in most cases. The only issues are implementing it in an existing development project, and having no formal software lifecycle already in place to build on. A security liaison for the project could be a contract employee for the life of the development cycle, or part time if funds do not permit. An example of how proper testing could have saved millions of dollars is when the Soviets acquired pipeline control software from a Canadian company. What they did not know is that the CIA had reportedly planted a logic bomb in the software, which caused the pipeline to explode in 1982. The code was never properly tested in a test environment or reviewed before being placed into production.
Penetration testing should be done on all web applications and on any portions of code that interact with users. There are many applications that can run automated penetration tests if that is the route chosen, but those can be expensive and can only use an existing set of known attacks and exploits. A true penetration test builds on existing exploits and crafts specific threats at a target, which only a penetration expert can do. As good as appliances and threat applications are becoming, they still cannot replace the real thing, yet. The “ping of death” in 1995 and 1996 was an attack using malformed, oversized ping packets that crashed many operating systems, including Windows, Unix, and Mac systems. Simple security checks were not used when handling IP fragmentation, which could have been discovered if an ethical hacker had attempted this same attack on any one of these operating systems.
Third-party reviews during development can be costly, but again, “third party” can simply mean developers from other parts of the organization or from different development teams. This must be carefully managed if kept in house, as it could cause rifts between divisions and teams; properly managed, it can be very cost-effective and easy to implement in any organization. The best method is obviously an outsider to the organization, but money permitting, a neutral internal team or properly trained employees can do the same thing, combined with proper security controls, change management, and project management regulations. In 1962, the Mariner I space probe diverted from its path during launch and had to be destroyed, costing NASA and taxpayers $18.5 million, because a developer did not translate a handwritten formula into the proper code. A simple requirements check or code review would have discovered this error.
Development Certification could find more hurdles and obstacles to overcome as most organizations do not currently do this. In order to properly have a certified system when using different companies software, you will have to have an expert from that company also sign off on the certification process which would create partnerships or cost more money to get a security consultant from another company in to represent their product. While this process could be difficult to have sign off from other companies, simply suggesting to customers the settings and or certified setups will go a long way in not only customer satisfaction but also as a way to transfer risk to the customer if they run into security concerns with setups that are not the “Certified” settings but instead just the “Compatible” settings.
The life cycle of software normally falls completely on the development side, but here responsibility also falls on the users and customers themselves to ensure their own security. Customers need to establish a certification and accreditation process on their side to ensure they are implementing established controls to safeguard their assets and information. Too often, applications are simply dropped into an environment with no standardization and no security checks on how they interact with that environment, creating unknown risks and exposure to vulnerabilities that will never be documented or mitigated. One software failure that such a process could have prevented began in 1985, when a radiation-therapy machine overdosed six patients, killing three of them, after an upgrade to the existing device. The upgrade was built on an existing operating system with flaws that surfaced only when the technician typed in incorrect information. It is not enough to trust that all the requirements and checks were put in place by the software developers; applications must be validated in your own environment to ensure they work according to your requirements.
Auditing the customer's practices and security controls is also a critical step in ensuring they are not exposing themselves to more risk than they are willing to accept. This practice almost always has to be performed externally to ensure the validity of the audit, and it cannot begin until formal controls have been documented and signed off by management. You cannot audit anything that has not been formally designed, documented, and approved, which makes this one of the more difficult parts of the process, especially for smaller organizations.
Patch management, beyond the point of developing the patches, falls completely on the customer. For a smaller organization this can be cheap and easy to implement. Larger organizations that host specific applications may want to test all patches and updates first, which can become quite a task; it is resource intensive and must be applied with discretion based on the applications affected and on client service-level agreements where they exist. There are, however, many applications and appliances that automate patch deployment and silent installation, making the process seamless to users. Many exploits today, especially popular spear-phishing emails, use PDFs to exploit older versions of Adobe Acrobat, since most users do not regularly update Adobe Reader. These spear-phishing emails can give an attacker complete command and control over infected machines, which can then be used to mine information and perform reconnaissance inside an environment.
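The core of customer-side patch management is comparing what is installed against the minimum version that carries a given security fix. The sketch below shows that comparison in Python; the inventory dictionary, application name, and version numbers are all hypothetical stand-ins for whatever asset-management data an organization actually maintains.

```python
# Sketch of a patch-baseline check: flag any installed application whose
# version is below the minimum patched version. Names and versions here
# are illustrative only.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def needs_patch(installed: str, minimum_patched: str) -> bool:
    return parse_version(installed) < parse_version(minimum_patched)

inventory = {"pdf-reader": "9.1.0"}   # hypothetical installed version
baseline = {"pdf-reader": "9.3.2"}    # hypothetical patched baseline

for app, version in inventory.items():
    if needs_patch(version, baseline[app]):
        print(f"{app} {version} is below {baseline[app]}: schedule update")
```

Automated patch-deployment appliances perform essentially this comparison at scale; the tuple comparison also handles multi-digit components correctly, where naive string comparison would rank "10.0" below "9.0".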
Awareness and training can be the biggest low-cost win besides including security in the development lifecycle itself. Training your developers to write more secure code could be as simple as an online class or a few days of instruction for all your developers at once. It is essential for keeping up with new threats and changes in the game; without it, you will need many more safeguards to release anything resembling secure code. Developers need training not only in securing their code but in becoming better developers generally: there are new techniques, and new languages and libraries built with security in mind, that they can take advantage of. The other side of training is the customers. Most customers have no idea how to use applications or tools securely, and this is another cheap and easy win if you provide refreshers at least once a year. The developers and security team can put together informational guides on how to use their tool securely in as little as a day.
Users in general must also be kept up to date on what to look out for. Simply reading an email carefully can reveal a phishing message with a PDF attachment. If users had been warned earlier about the "ILOVEYOU" virus, and told not to open emails with that subject line, they could have avoided the nearly $8.75 billion in losses the virus generated.
5. Conclusion
Security threats change every day, from new exploits to new forms of organized attack, and they will continue to evolve. All of these attacks and exploits go after one common element: some form of weakness, whether a weakness in code such as a buffer overflow or an opening for SQL injection, or simply an unsuspecting user opening an email. There is no single security solution that can solve these problems, now or in the future. Since it is impossible to be fully impenetrable, being as secure as possible requires layers of security.
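To make the SQL injection weakness named above concrete, the sketch below contrasts a query built by string concatenation with a parameterized query. It uses an in-memory SQLite database so the example is self-contained; the table and data are illustrative, not drawn from any real system.

```python
# SQL injection in miniature: the same attacker-controlled string, run
# through a concatenated query and through a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "x' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query itself,
# so the always-true OR clause matches every row.
unsafe = f"SELECT * FROM users WHERE name = '{malicious}'"
leaked = conn.execute(unsafe).fetchall()

# Safe: the driver treats the input strictly as data, never as SQL,
# so the literal string matches no user name.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(leaked), len(safe))
```

The layered-defense point holds even here: parameterized queries close this one weakness, but input validation, least-privilege database accounts, and code review are still needed around it.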
Secure application development does not stop at the development of the product. It requires integrating capabilities between teams and building an organizational structure that develops security controls and adheres to them. It takes help from third-party resources to ensure your internal organization is effectively designing and developing secure code, and testing and validation from internal and external resources to attain it. Once the product is released, it is up to the customers to ensure their own security as best they can, and up to the developers to issue patches and updates for any vulnerability they or others find.
The final piece is awareness and training for developers, so that they can develop secure code from the ground up. Most developers did not go through security training in school or during their professional tenure, so asking them to suddenly know how to develop code securely is not feasible. End users are the last piece of the awareness puzzle: if they are coached on how to use applications properly and what to look for in the realm of security, far fewer breaches and compromises will occur.
No one of these suggestions or solutions will work on its own. All of them need to be in place in one form or another to ensure that the code being developed saves not only your company and your customers money, but in some cases the lives of your clients and the security of our nation.
6. References
Tipton, Harold F., & Henry, Kevin. (2006). Official (ISC)2 Guide to the CISSP CBK. Boca Raton:CRC Press, 2006
Tipton, Harold F., & Krause, Micki. (2006). Volume 3 of Information Security Management Handbook Series. Boca Raton:CRC Press, 2006
Microsoft Corp. The Trustworthy Computing Security Development Lifecycle.
[updated March 2005]. Available from http://msdn.microsoft.com/en-us/library/ms995349.aspx
Carnegie Mellon University. (2009) "Secure Design Patterns."
[updated October 2009]. Available from http://www.cert.org/archive/pdf/09tr010.pdf
Information Assurance Technology Analysis Center (IATAC) & Data and Analysis Center for Software (DACS), (2007). "Software Security Assurance."
Available from http://iac.dtic.mil/iatac/download/security.pdf
Howard, Michael. (2006) “8 Simple Rules for Developing More Secure Code”
Available from http://msdn.microsoft.com/en-us/magazine/cc163518.aspx
Hoffman, David E. (2004) “CIA Slipped bugs to Soviets”
Available from http://www.msnbc.msn.com/id/4394002