All posts must consist of six (6) substantive responses with a minimum of 150 words each (Responses 1 through 6). List and break down each response in a Word document, along with its reference. Each response should further discuss the subject or provide more insight. To give context for the responses, the discussion posts they reply to are included below. Work must be 100% original and not plagiarized, and must meet the deadline.
For this week's discussion I have chosen Nagios Core, an open-source system and network monitoring application. Being able to monitor your network is essential to day-to-day operations, so it is best that one is able to do so. Nagios monitors hosts and services of your choosing and sends alerts both when things go wrong and when they recover. The application was originally designed to run under Linux but turned out to run on other operating systems as well. Features of the Nagios network monitoring application include the ability to monitor network services such as SMTP, HTTP, and PING, among others. You can also monitor host resources such as processor load and disk usage. Notifications can be delivered via email, pager, or other methods to report both problems and their resolution.
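As a rough illustration of how the HTTP and PING checks mentioned above are set up, here is a minimal sketch of Nagios Core object definitions. The host name, address, and thresholds are hypothetical examples, and the `linux-server` and `generic-service` templates are assumed to come from the stock sample configuration:

```cfg
# Hypothetical host definition -- address and name are examples only
define host {
    use                  linux-server       ; inherit from a stock template
    host_name            web01
    address              192.0.2.10
    max_check_attempts   3
}

# HTTP service check using the standard check_http plugin
define service {
    use                  generic-service
    host_name            web01
    service_description  HTTP
    check_command        check_http
}

# PING check with warning/critical thresholds (round-trip ms, % packet loss)
define service {
    use                  generic-service
    host_name            web01
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}
```

When a check fails, Nagios uses the contact and notification settings inherited from the templates to send out the alerts described above.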
About Nagios Core. (n.d.). Nagios Core Documentation. Retrieved December 16, 2021, from https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/about.html
I chose to research SolarWinds, one of the more popular network monitoring tools on the market. It is used by major organizations such as Chevron, NASDAQ, and the US military, specifically supporting the Warfighter Information Network-Tactical (WIN-T). During my later years in the military, we transitioned to WIN-T solutions, a rapidly deployable tactical network, to support combat operations. SolarWinds was one of the monitoring tools entrusted to manage, monitor, and troubleshoot network performance. Many modules are available for purchase, but some of the more prominent are the Network Performance Monitor, which provides real-time availability and network health status; the Patch Manager, which can locate vulnerabilities and deploy third-party patches from a centralized application; and the Security Event Manager, which can detect and respond to threats and suspicious activity, report on compliance across the network, and collect and analyze logs from countless connectors, from antivirus applications to web servers and everything in between.
The system/application domain is responsible for the software that runs on an organization's computers. Common best practices exist in information security, just as they do in other fields, and they provide a starting point for organizations looking to secure their systems. Organizations should use the notion of layered defense to achieve the optimal balance between security and usability; layered defense should not be overcomplicated. For the system/application domain, according to our text, security measures generally fall into three categories: isolating the data, restricting access to the data, and safeguarding the data from loss via redundancy (Wand, D. M. C.).
The computer room/data center of an organization is where the hardware that runs the organization's essential applications is kept, and it can benefit from all three types of system/application security. When it comes to data isolation and access control, physical access control is the most effective solution. A physical access control system is often implemented through door locks, which may use physical keys, electronic passcodes, biometrics, or a mix of all three. Keeping data safe from loss is accomplished by maintaining an environmentally controlled computer room/data center and using a fire suppression system.
It is also recommended that organizations develop a Disaster Recovery Plan (DRP) for the system/application domain. A DRP is a contingency plan that outlines the procedures to be taken to restore an organization's infrastructure after an incident such as a natural catastrophe like a flood (Application Domain). Many stages of recovery may be implemented, ranging from a cooperative arrangement with another company to temporarily take over important business tasks, up to a hot site, which is a full replica of the firm's infrastructure located in another place.
Application Domain – an overview | ScienceDirect Topics. (2020). Science Direct. https://www.sciencedirect.com/topics/computer-science/application-domain
Wand, D. M. C. (2021, February 15). Securing the Seven Domains of IT Infrastructure. Cyberfore. https://www.cyberfore.com/post/securing-the-seven-domains-of-it-infrastructure#:%7E:text=The%20System%2FApplication%20Domain%20includes,as%20end%2Duser%20software).
Security within the systems/application domain is concerned with maintaining the availability of services while protecting the information stored there, during transmission, and during active use. Best practices for ensuring security within this domain include isolating data; data isolation techniques involve the use of firewalls, network segmentation, and switches. Another suggested best practice is limiting access to sensitive data within the domain, which works to prevent unauthorized access to and disclosure of the resources being protected. Limiting access is often accomplished by implementing access controls, which restrict the devices and users able to access information. Access controls are often implemented through the principle of least privilege, the idea that an individual is given only the amount of access needed to complete their job function, and can be managed at both the user and group level within the domain. A third best practice in the systems/application domain is protecting information from loss through redundancy. This means creating backups of the resources being stored and using different storage methods such as external hard drives, off-site locations, and cloud storage; doing this prevents a loss of data in the event something happens to the system or site (Weiss & Solomon, 2015). Hope you are all having a great week.
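The least-privilege idea above can be sketched in a few lines of code. This is a minimal illustration, not any specific product's access-control API; the role names and permission strings are hypothetical:

```python
# Minimal sketch of least-privilege access control: each role is granted
# only the permissions its job function requires, and anything not
# explicitly granted is denied. Roles/permissions here are made up.

ROLE_PERMISSIONS = {
    "hr_clerk":   {"read:payroll"},
    "hr_manager": {"read:payroll", "write:payroll"},
    "auditor":    {"read:payroll", "read:logs"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Allow access only if the role was explicitly granted the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("hr_clerk", "read:payroll"))   # True
print(is_allowed("hr_clerk", "write:payroll"))  # False: not granted
```

Because the default is an empty permission set, an unknown role gets no access at all, which is the deny-by-default behavior least privilege calls for.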
Weiss, M., & Solomon, M. (2015). Auditing IT Infrastructures for Compliance. Jones & Bartlett.
Deductive forensics pieces together available information to discern what may happen next from events that have already happened. For instance, a police investigation will use evidence from past cases to build a profile of a suspect, attempting to deduce where or whom that person may attack again. Deductive forensics can be used in the same way by AI and machine learning. With these tools, the time it takes a human to see patterns in information or data is eliminated, because the human factor becomes unnecessary. Machine learning will enable computers to see patterns in message and data traffic from specific IPs or locations; it will spot patterns in form queries when a SQL injection or buffer overflow attack is in progress, and it will be able to anticipate a DDoS attack as it occurs and sinkhole the traffic. AI and machine learning will also be able to track an attack once one is recognized. There are often insider threat attacks or stolen legitimate credentials that are indistinguishable from an authorized user, because they are an authorized user's credentials. However, machine learning may be able to log and find patterns in that user's work indicating the credentials are being used from an IP incompatible with their regular IP, and flag those occurrences as abnormal. Once machine learning is programmed with parameters to search for, it will be able to find a multitude of evidence for crimes being committed, about to be committed, or already committed. It is simply a matter of not granting too much authorization for the searches, so as to stay within privacy laws and jurisdictions.
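The credential-misuse scenario above can be sketched with a toy heuristic: remember which networks a user normally logs in from, and flag logins from anywhere else as abnormal. This is a simplified illustration of the idea, not a real ML model; the class name, the /24-network heuristic, and the sample IPs are all assumptions:

```python
import ipaddress
from collections import defaultdict

class LoginAnomalyFlagger:
    """Toy baseline: flag a login whose source IP falls outside the
    /24 networks the user has historically logged in from."""

    def __init__(self):
        self.usual_networks = defaultdict(set)  # user -> set of /24 networks

    def record_login(self, user: str, ip: str) -> None:
        # Collapse the source IP to its /24 network and remember it
        net = ipaddress.ip_network(f"{ip}/24", strict=False)
        self.usual_networks[user].add(net)

    def is_abnormal(self, user: str, ip: str) -> bool:
        # Abnormal only if we have history AND the IP matches none of it
        addr = ipaddress.ip_address(ip)
        nets = self.usual_networks[user]
        return bool(nets) and not any(addr in n for n in nets)

flagger = LoginAnomalyFlagger()
flagger.record_login("alice", "203.0.113.25")
print(flagger.is_abnormal("alice", "203.0.113.40"))  # False: same /24
print(flagger.is_abnormal("alice", "198.51.100.7"))  # True: unfamiliar network
```

A real machine learning system would learn far richer patterns (time of day, device, behavior), but the principle is the same: build a baseline of normal activity and flag deviations for an investigator.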
IBM Cloud Education. (n.d.). What is machine learning? IBM. Retrieved December 15, 2021, from https://www.ibm.com/cloud/learn/machine-learning
Five essential capabilities: Automated machine learning – ironside – business analytics. Data Science. Information Management. Ironside. (2019, January 28). Retrieved December 15, 2021, from https://www.ironsidegroup.com/2018/06/06/five-essential-capabilities-automated-machine-learning/
I hope everyone is having a great week; let's jump right into the discussion forum. Deductive forensic (DF) investigation provides a method for recovering missing, intentionally erased, or obscured data from a suspect's computer. Nonetheless, present human capability and government services are insufficient to pursue cyber offenders, and demand for DF is particularly high at the moment. DF investigation processes aid in gathering useful information from a compromised machine. Companies today place great value on digital technology and the Internet, and collecting critical data from this equipment is equally important. To support or disprove the investigator's reasoning about an incident, digital data must be collected from the device. Computational intelligence techniques have been extensively researched in a variety of fields, including forensics. Forensic science is the study of analytical procedures that address issues of relevance in medicine, communication, law, genetic research, security, and other fields. However, forensic investigation is often undertaken through laboratory procedures, which are both expensive and time consuming. Instead of explicit programming, machine learning should be employed as a framework in digital forensics so that systems can learn from experience and observation. If the machine is actively learning and making decisions based on information in digital forensics, it will be more useful to investigators. Digital forensic organizations are working hard to develop new methods and concepts for recovering digital evidence when attackers attempt to breach the security of cloud providers. This is not a simple process, and it always takes time to achieve the best results, so researchers in the field of digital forensics favor live forensics, meaning they examine a cloud environment with the assistance of their tools and software each time.
Brecht, D. (2018, January 26). Computer crime investigation using forensic tools and technology. Retrieved from https://resources.infosecinstitute.com/topic/computer-crime-investigation-using-forensic-tools-and-technology/
Deductive and inductive reasoning. (2006, October 18). Retrieved from https://forensicblog.org/deductive-and-inductive-reasoning/