There is no single standard definition of what exactly the Internet of Things (IoT) is. Most professionals define the term as the connection of diverse devices that can provide or request a service over the Internet, enabling human-to-thing and thing-to-thing transmission of data. There are many ways that IoT applications are improving everyday life. Vehicles are now being equipped with small IoT devices that let them download roadmaps with updated traffic information and offer protection against auto theft. Buildings are being fitted with IoT sensors that allow users to remotely control a building's energy consumption across systems such as lights and air conditioners, based on their preferences. Many household items are now sold with their own embedded processing units, giving the products IoT capabilities.
The question of what an IoT system is composed of has caught the attention of many people in academia and industry. The IoT reference models used to describe the different sections of an IoT system range from three to seven levels. The first reference model for IoT systems consisted of three levels and described IoT as a system of Wireless Sensor Networks (WSNs).
The second proposed model has five levels and reduces the complexity of interactions between different sections of the model, resulting in simpler applications with well-defined components. The current model, created by Cisco in 2014, extends the previous models to seven levels, where the flow of data has a dominant direction depending on the type of application. The first three levels of the model are grouped into the edge-side layer.
Level 1 consists of edge computing nodes and devices such as smart controllers, sensors, and RFID readers.
Level 2 consists of the many communication components that enable the transmission of data or commands.
Level 3 is the edge (or fog) computing level. This is where simple data processing starts, reducing the computation load at the upper levels and producing a faster response.
The next three levels are grouped into the server- or cloud-side layer.
Level 4 reduces the amount of data in motion to a resting state by filtering and selectively storing network packets into database tables.
At Level 5, the information is abstracted, providing the ability to render and store data for simpler and more efficient processing.
At Level 6, the information can be interpreted in applications for marketing, academic, and industrial needs.
The final group contains only Level 7, where users interact with applications built on IoT node data.
The motivations of attackers who target IoT devices and systems include stealing sensitive data and compromising IoT components. The vulnerabilities of IoT devices at the first level start with hardware Trojans. These are a major concern for IoT integrated circuits, since an attacker can use the circuit to exploit a node's functionality and gain access to data or software running on the integrated circuits. This can happen in one of two ways:
An externally activated Trojan, triggered by an antenna or sensor
An internally activated Trojan, triggered once a certain condition is met within the integrated circuit
Non-network side-channel attacks on an edge node may reveal critical information under normal operation, even when the node is not currently using any wireless communication to send or receive data. Lastly, denial-of-service (DoS) attacks can be launched against IoT devices; the three main types are battery draining, sleep deprivation, and outage attacks.
In a battery-draining DoS attack, an attacker sends many packets to a node, forcing it to run various system checks repeatedly. Since IoT nodes tend to be very small and carry small batteries with limited energy capacity, this quickly exhausts their power.
In a sleep deprivation attack, an attacker sends a chain of requests to a node that appear to be legitimate. Since most IoT nodes are battery-powered with limited energy capacity, servicing the requests keeps the node awake and drains its battery.
When an outage attack occurs, an edge node stops performing its normal operations. However, an outage may also result from an unintended error or a system issue.
Implementing RFID tags in IoT devices at the edge node level requires all such tags to provide a unique identifier that any nearby RFID reader can read. A tag attached to a product or an individual effectively creates tracking information. Certain types of tags carry information about the product or individual they are attached to, making a node easy for a third party to inventory. Electronic product code (EPC) tags contain two fields that create privacy concerns for users: the manufacturer code and the product code.
At the communication level of the reference model, one attack an adversary might use for reconnaissance is network eavesdropping, or packet sniffing. This occurs when an attacker deliberately listens to private conversations over system communication links. It can provide an attacker with invaluable information when the data is unencrypted or sent in plaintext. Data within a network packet might include the following:
Usernames & passwords
Shared network passwords
A side-channel attack is not easy to implement but is a powerful attack against encryption algorithms. This type of attack can be launched from both the edge node and communication levels. However, side-channel attacks launched from the communication level are not easily defended against, since the method is non-invasive and difficult to detect. Another possible attack at this level is the injection of fraudulent packets into communication links, either by inserting new packets into the network or by capturing network packets and manipulating the data they contain.
There are new and emerging challenges to securing IoT systems, such as a dramatic increase in the number of weak links and unexpected uses of data. The increase in weak links results from the special characteristics of the devices and from cost decisions by device manufacturers: as compact, battery-powered devices with limited storage and computation resources, many devices on the market cannot support secure cryptographic protocols. Lastly, the unexpected uses of environment- or user-related data collected by Internet-connected sensors, enabled by pervasive IoT technologies, have led to an unwelcome influence on everyday living and created privacy concerns for users.
As more developers push new IoT devices and services to the Internet, new IoT vulnerabilities and attacks against users and systems will be discovered. Most systems are designed for a specific application or service, and testing the security of a system can be complex and time consuming, but it is necessary as the number of new devices deployed to the Internet by manufacturers increases each week. Some security threats may not be as widely recognized as others, but new threats to IoT devices and applications should be addressed by both security professionals and developers to minimize the scope of possible risk to users and devices.
Mosenia, A., & Jha, N. (2017). A Comprehensive Study of Security of Internet-of-Things. IEEE Transactions on Emerging Topics in Computing, 586-602.
The Internet of Things (IoT) is a system of interrelated computing devices and mechanical and digital machines equipped with unique identifiers (UIDs) that can collect, share, and analyze data over a network without requiring human-to-human or human-to-computer interaction. It is an interconnection of heterogeneous entities, where the term "entity" refers to a human, a sensor, or anything that may request or provide a service. As more wireless networks come online, the growing number of IoT devices around the world will only expand the scope of IoT devices and applications. Vendors are now leveraging IPv6 addressing schemes with high-speed Internet connections to improve the design and performance of IoT devices, creating increased growth and demand for new IoT products.
IoT is playing a key role in transforming everyday life, with greater connectivity and functionality generating data faster than most applications can process and filter. By combining these connected devices with automated systems, it is possible to gather information, analyze it, and create an action or event to help someone with a task or learn from a process. However, many IoT devices have severe limitations on the computational power available to them. These constraints often leave them unable to implement basic security measures, and the low price and consumer focus of many devices make a robust security patching system uncommon.
The scope of IoT applications has opened the door for many new business opportunities and revenue streams. Many businesses implement IoT services to gain a better view of operational expenses and better marketing insight based on consumer behavior and product placement. This can reduce the total time it takes for a product to become available to a consumer. IoT also offers businesses just-in-time training for employees, improving labor efficiency and increasing organizational productivity. Logistics and supply chains are improved with IoT by giving individual items unique identifiers, letting supply chains make intelligent choices about how to deliver goods and services to consumers more efficiently. IoT helps manufacturing companies measure a product's performance, diagnose errors, and improve a product's quality, performance, and support.
II. IoT Reference Model
The initially proposed IoT reference model consists of three levels and represents IoT as an extended version of wireless sensor networks (WSNs).
In 2014, a new IoT Reference Model was created by Cisco; it consists of the following seven levels, with data generally flowing bidirectionally.
Collaboration and processes (People & Business Processes)
Application (Reporting, Analytics, Control)
Data Abstraction (Aggregation & Access)
Data Accumulation (Storage)
Edge Computing (Data Analysis & Transformation)
Connectivity (Communication & Processing Units)
Physical Devices & Controllers (Devices)
Level 1 – This level is concerned with physical devices at the edge side; it contains devices such as smart controllers, sensors, and RFID readers. Data confidentiality and integrity are considered from here upward.
Level 2 – This level contains all communication and processing units that enable the transmission of data or commands by using routing and switching protocols. Communication happens between IoT devices in the first level and components in the second level, including communication across data networks.
Level 3 – Edge Computing, where simple data processing is initiated; it is essential for reducing computation loads at the higher levels as well as providing a fast response to events. Learning algorithms may be implemented at this level.
Level 4 – Data Accumulation, where data is combined from multiple sources to enable the conversion of data in motion to data at rest. At this level, data is converted from network packets into formats such as database tables, and filtering and selective storing determine whether it is of interest to higher levels for future analysis or should be shared with higher-level computing servers.
Level 5 – Data Abstraction, which provides the opportunity to render and store data such that further processing becomes simpler or more efficient. Services at this level may include data normalization/denormalization, then indexing and consolidating data into one place with access to multiple data stores.
Level 6 – Application, where information interpretation occurs and software cooperates with the data accumulation and data abstraction levels.
Level 7 – This level involves users and business processes using IoT applications and their analytical data to make informed choices.
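As a concrete illustration of Level 4 above, the sketch below (a hypothetical example, not part of the reference model itself) filters simulated packet records and persists the interesting ones as database rows with Python's built-in sqlite3 module, turning data in motion into data at rest:

```python
import sqlite3

# Hypothetical Level 4 sketch: selectively store packet records as rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE packets (src TEXT, dst TEXT, length INTEGER)")

packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "length": 1500},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "length": 60},
]

# Selective storing: keep only packets above an (invented) size threshold.
for p in (p for p in packets if p["length"] >= 100):
    conn.execute("INSERT INTO packets VALUES (?, ?, ?)",
                 (p["src"], p["dst"], p["length"]))

print(conn.execute("SELECT COUNT(*) FROM packets").fetchone()[0])  # prints 1
```

The filter predicate and threshold are stand-ins for whatever "of interest to higher levels" means in a given deployment.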
III. Fog and Edge Computing in IoT
IoT vendors are implementing edge and fog computing technology to provide enhanced data analysis and management and to increase the scope of possible IoT applications. In computer networking, the control plane is the part of the router architecture concerned with the network topology, i.e., the information in the routing table that defines what to do with incoming packets; it is also the part of the software that configures and shuts down the data plane. The data plane is the part of the software that processes the data requests. Fog computing is a standard that defines how edge computing should work; it facilitates the operation of computation, storage, and networking services between IoT devices and cloud computing centers. This enables computing services to reside at the edge of the network as opposed to servers in a data center. Where a traditional cloud architecture relies on a centralized computing facility to process data from the different endpoints in the network, fog computing uses edge devices to carry out a substantial amount of computation, storage, and communication locally before sending results over the Internet backbone. Fog computing can be perceived both in large cloud systems and big data structures, referring to the growing difficulties in accessing information objectively. It brings data closer to the user as compared to storing data in distant data centers, providing location awareness and low latency and improving the overall quality of service.
Edge computing is located at the edge of the network: IoT data is collected and analyzed directly by controllers or sensors, then transmitted to a nearby computing device for analysis. This brings processing closer to the data source, so data does not need to be sent to a remote cloud or other centralized system for processing. Eliminating that distance and transit time improves the speed and performance of data transport, as well as of devices and applications at the edge. Instead of depending entirely on a cluster of clouds for computing and data storage, edge computing can provide intelligent services by leveraging local computation on local edge devices. Edge computing applications can pre-process, filter, score, and aggregate data.
Edge computing pushes communication capabilities, processing power, and intelligence directly into devices such as programmable automation controllers.
Fog computing pushes intelligence to the local area network and processes data in either an IoT gateway or a fog node.
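The pre-processing role described above can be sketched as follows; the function, thresholds, and sample readings are all hypothetical, standing in for whatever an edge node would forward upstream:

```python
# Hypothetical edge-node sketch: drop out-of-range sensor samples and
# forward only a compact aggregate upstream, reducing data in motion.
def aggregate_readings(readings, low=0.0, high=100.0):
    """Filter raw samples to a plausible range, then summarize them."""
    valid = [r for r in readings if low <= r <= high]
    if not valid:
        return None  # nothing worth forwarding
    return {"count": len(valid), "min": min(valid),
            "max": max(valid), "mean": sum(valid) / len(valid)}

raw = [21.5, 22.0, -40.0, 21.8, 999.0]  # two obvious sensor glitches
print(aggregate_readings(raw))
```

Instead of five raw samples crossing the network, a single small summary record does, which is the point of processing at Level 3 before data reaches the cloud side.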
IV. The Vulnerabilities of IoT
Security is a significant challenge for companies adopting and deploying IoT innovations. There is little motivation for vendors to change, with few or no consequences for selling insecure devices, since devices can be manufactured very cheaply and are not maintained with regular patches and updates. A major security concern for integrated circuits is the hardware trojan: a malicious modification of an integrated circuit (IC) that enables an attacker to use the circuit, or exploit its functionality, to obtain access to data or software running on the circuitry.
Externally Activated (Antenna or sensor)
Internally Activated (Given Condition; Logic)
IoT systems are at higher security risk for several other reasons, including insecure network interfaces or services and insufficient authentication/authorization. These systems might include data or services that were never designed to be connected to the global Internet. They may lack a well-defined perimeter and change continuously due to device and user mobility.
IoT systems are highly diverse with respect to communication media and protocols, platforms, and devices. As a result, IoT systems, or portions of them, may be physically unprotected and/or controlled by different parties. IoT devices can also be autonomous entities that control other IoT devices. Routing attacks against an IoT network affect how packets are routed by spoofing, redirecting, or misdirecting them to another network. An attacker can inject fraudulent packets into communication links using three different methods: insertion, manipulation, or replication.
Several communication vulnerabilities in IoT devices stem from a lack of transport encryption or integrity verification, which can allow a third party to intentionally listen to private conversations over the communication lines. Related privacy concerns with IoT devices include insufficient security configurability, insecure software/firmware, and poor physical security.
A DoS attack is a standard attack against IoT devices that jams the transmission of radio signals, either continuously, blocking all transmissions, or intermittently, reducing system performance. There are three well-known types of DoS attacks against edge computing nodes: battery draining, sleep deprivation, and outage attacks. Battery-draining attacks exploit the fact that nodes usually carry small batteries with very limited energy capacity. In a sleep deprivation attack, the victim is a battery-powered node with limited energy capacity, and the attacker sends an undesired set of requests that seem legitimate. Lastly, an outage attack occurs when an edge device stops performing its normal operations.
V. Botnets and Internet of Things
A botnet is a robot network of compromised machines, or bots (zombies), that run malicious software under the command and control of a bot master and are used for various malicious activities. Bots can automatically scan entire network ranges and propagate themselves to other machines using known vulnerabilities and weak passwords. Once a machine is compromised, a small program is installed for future activation by the bot master, who at a certain time can instruct the bots in the network to execute actions. Botnet architecture has evolved over time in an effort to evade detection and disruption. Bot programs are constructed as clients that communicate via existing servers. This allows the bot master to perform all control from a remote location, obfuscating the traffic in a client-server or peer-to-peer network.
Once the software is downloaded, the bot contacts its master computer to signal that it is ready to go. An individual device can be compromised by multiple types of attack, often at the same time. Server operators may choose to publish rules on the behavior of Internet bots, informing a web robot which areas of a website should not be processed or scanned.
The text file robots.txt is normally placed in the root of a web server to govern a bot's behavior on that server; it can then be used by search engines to categorize websites. Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the website. If the file does not exist, web robots assume that the website owner does not wish to place any limitations on crawling the site.
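A minimal sketch of how a well-behaved robot consults these rules, using Python's standard-library robots.txt parser (the rules and URLs below are invented for illustration):

```python
# Sketch: checking robots.txt rules with the standard library.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse example rules directly instead of fetching them over the network.
rp.parse("""
User-agent: *
Disallow: /private/
""".splitlines())

print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
```

A crawler that honors the protocol calls can_fetch before requesting each URL; a malicious bot, of course, is free to ignore the file entirely, which is why robots.txt is a courtesy convention rather than a security control.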
Botnets can be used to perform Distributed Denial of Service (DDoS) attacks, steal data, send spam, allow an attacker to access the device and its connection, or mine cryptocurrency. A DDoS attack is a malicious attempt to make a server or a network resource unavailable to users, achieved by saturating a service, which results in its temporary suspension or interruption. The goal of the attack is to overwhelm a target application with an extreme number of requests per second (RPS), driving high CPU and memory usage. A single machine can either target a software vulnerability or flood a targeted resource with packets, requests, and queries. Application-layer DDoS attacks take the form of HTTP floods, slow attacks, or zero-day assaults.
Network-layer DDoS attacks:
Gigabits per second (Gbps)
Packets per second (PPS)
Consume the target's upstream bandwidth
VI. The Mirai Botnet
On October 21, 2016, a massive DDoS attack left much of the Internet inaccessible on the United States East Coast. It was the first of a novel category of botnets that exploit IoT devices and systems, turning IoT devices running a Linux operating system into remotely controlled bots that can be used as part of a botnet in large-scale network attacks. Mirai primarily targets online consumer devices such as IP cameras and home routers. It has two core purposes: to locate and compromise IoT devices to further grow the botnet, and to launch DDoS attacks based on instructions received from a remote command and control. Mirai performs wide-ranging scans of IP addresses, continuously scanning the Internet for the IP addresses of IoT devices. However, Mirai bots are programmed with a hardcoded list of IP address ranges that they will not infect during scans. These addresses belong to the US Postal Service, the Department of Defense, the Internet Assigned Numbers Authority (IANA), Hewlett-Packard, and General Electric.
Mirai locates under-secured IoT devices that can be remotely accessed by scanning with a table of common factory-default usernames and passwords, then logs in and infects them with the Mirai malware. Its attack functions enable HTTP floods as well as OSI layer 3 and 4 DDoS attacks; when launching HTTP floods, Mirai bots hide behind default user agents. Infected devices continue to function normally, apart from occasional sluggishness and increased bandwidth use.
If an IoT device becomes infected with Mirai, an administrator should immediately disconnect the device from the network, then reboot it. Because the Mirai malware exists only in dynamic memory, rebooting the device clears it. Afterwards, ensure that the device's previous access password has been changed to a strong one. If you reconnect before changing the password, the device can quickly be reinfected with the Mirai malware.
VII. Countermeasures / Protection Techniques for IoT Devices
The following are basic protection techniques suggested by the Cybersecurity and Infrastructure Security Agency (CISA) that provide baseline IoT security against a third party or hostile attacker. Since many IoT devices do not have powerful processors or enough memory, intrusion-detection analysis will likely occur at a gateway device.
An IoT device owner should stop using default or generic passwords and disable all remote (WAN) access to the device. Ensure that all default passwords on IoT devices have been changed, and update devices with security patches from the manufacturer when available. Even when a device has known software vulnerabilities, patches or workarounds may not be released for a very long time; this makes intrusion-detection techniques even more important.
Device administrators should disable Universal Plug and Play (UPnP) on routers unless it is necessary. A network administrator should also monitor port 48101 for suspicious traffic, as infected devices often attempt to spread malware by using this port to send results to a third party or threat actor, and should monitor TCP ports 23 and 2323 for third-party attempts to gain unauthorized control over IoT devices via Telnet.
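As a rough illustration of that kind of port monitoring (the connection records and helper function are hypothetical, not a CISA tool), observed connections can be flagged against the ports named above:

```python
# Hypothetical sketch: flag connections on ports associated with Mirai
# activity (Telnet, alternate Telnet, and the Mirai reporting port).
SUSPICIOUS_PORTS = {23, 2323, 48101}

def flag_suspicious(connections):
    """connections: iterable of (src_ip, dst_ip, dst_port) tuples."""
    return [c for c in connections if c[2] in SUSPICIOUS_PORTS]

observed = [
    ("10.0.0.5", "203.0.113.9", 443),      # ordinary HTTPS traffic
    ("10.0.0.7", "198.51.100.2", 48101),   # possible Mirai reporting
    ("10.0.0.7", "198.51.100.2", 2323),    # alternate Telnet probe
]
print(flag_suspicious(observed))
```

In practice the connection tuples would come from a firewall log, NetFlow export, or packet capture rather than a hardcoded list; a hit on these ports is a prompt for investigation, not proof of infection.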
As more developers and vendors push new IoT devices and services to the Internet, new IoT threats and attacks to control systems or steal data will be discovered. Most IoT systems are designed for a specific application or service, and testing a system's security can be complex and time consuming, but it is necessary as the number of new devices deployed to the Internet increases each week. Some security threats may not be as widely recognized or known as others, but security professionals should publicize new threats to IoT devices and applications so that developers and administrators can minimize the scope of possible risk to users and devices.
Bertino, E., & Islam, N. (2017, February). Botnets and Internet of Things Security. Computer, 76-79.
Cybersecurity and Infrastructure Security Agency. (2017, October 17). Alert (TA16-288A) – Heightened DDoS Threat Posed by Mirai and Other Botnets. Retrieved from Cybersecurity and Infrastructure Security Agency: https://www.us-cert.gov/ncas/alerts/TA16-288A
Microservices architecture is a style that structures an application as a collection of services. Each process becomes its own service, and each service runs in its own container with its own data storage, sharing no data with other services. The inverse is monolithic architecture, which builds all capabilities into a single executable and process: a server-side system based on a single application to develop and manage.
Microservices implement smart endpoints and use no complex middleware; the intelligence lives in the application, and the network simply helps route information.
Some characteristics of microservices architecture are:
Componentization via services
Organized around business capabilities
Decentralized data management
Designed for failure
Advantages of using microservices are:
A team can choose any language for each service
Less risk in change
Some availability when other services fail
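That last advantage can be sketched with a hypothetical service that degrades gracefully instead of failing outright when a dependency is down (all names below are invented for illustration):

```python
# Hypothetical sketch: a storefront service keeps partial availability
# when its recommendation dependency fails, falling back to a static list.
def get_recommendations(user_id, recommender_ok):
    """Return personalized items normally; fall back when the dependency is down."""
    if recommender_ok:
        return ["personalized-item-1", "personalized-item-2"]  # normal path
    return ["bestseller-1", "bestseller-2"]  # degraded but still available

print(get_recommendations(42, recommender_ok=False))  # ['bestseller-1', 'bestseller-2']
```

In a monolith, the equivalent failure would typically take the whole process down; in a microservices design, each caller decides how to degrade when a collaborator is unreachable.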
What is a chaos monkey?
A chaos monkey is a tool that randomly stops services in the infrastructure during the day while the services are being monitored. Since failure will happen in any distributed system, introducing a chaos monkey forces developers to anticipate how failure will occur and how it will be handled. Letting it loose in an infrastructure makes people more aware that things will break by forcing breakage to happen, so that monitoring and recovery can handle the event. This affects how code is designed and written, making it more robust. This is chaos engineering: the discipline of experimenting on a software system in production in order to build confidence in the system's capability to withstand turbulent and unexpected conditions.
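A toy version of the idea (not Netflix's actual Chaos Monkey) might look like the following, with the cluster modeled as a plain dictionary of service states:

```python
import random

# Hypothetical chaos-monkey sketch: randomly stop one running service so
# the team must observe how the rest of the system copes.
def chaos_step(services, rng=random):
    """services: dict mapping service name -> 'up' or 'down'."""
    running = [name for name, state in sorted(services.items()) if state == "up"]
    if not running:
        return None  # nothing left to break
    victim = rng.choice(running)
    services[victim] = "down"
    return victim

cluster = {"auth": "up", "billing": "up", "search": "up"}
victim = chaos_step(cluster)
print(f"stopped {victim}; cluster state: {cluster}")
```

A real implementation would terminate actual instances through an orchestration API and run only during monitored hours, but the principle is the same: failure is injected deliberately so recovery paths are exercised before they are needed.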
Conway’s law states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.” This is based on the reasoning that for a software module to function, multiple authors must communicate frequently with each other. Thus, the software interface structure of a system will reflect the social boundaries of the organization that produced it. In microservices, there is a lot of variation in how big each team should be and how many services it should support.
A model is a way of expressing a software design in some form of abstract language or pictures. One person’s representation of an object might differ from someone else’s. Developers try out various designs of a model and decide which will be best for the final solution. An object modeling language such as UML is used to develop and express the software design. Designing a model can be challenging depending on its complexity and the number of features it has. There might be dozens of models to choose from; an effective modeler needs to choose the appropriate ones.
What is Analysis?
Analysis is a process of discovery whose purpose is to understand the customer’s needs and requirements. An analysis must describe the functional specification: precisely what the system must do to meet its requirements.
How a system should work
Identify classes and their relationship and behavior
What is Design?
Design is a blueprint for producing a solution to a problem, summarized by the requirements specification. It should always describe how the system is to perform its tasks to meet the specification. The design of algorithms takes into account the details of each component, using a programming language or pseudocode.
Software engineering process framework activities are complemented by several umbrella activities. The umbrella activities of a software process are:
Software tracking and control
Using project management software during the lifespan of the project helps to monitor work on each module, forecast development time, estimate required human and technical resource hours, and calculate a critical path for the project.
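Calculating a critical path amounts to finding the longest dependency chain through the task graph; the tasks and durations below are invented purely for illustration:

```python
from functools import lru_cache

# Hypothetical task graph: name -> (duration in days, prerequisite tasks).
tasks = {
    "design": (3, []),
    "code":   (5, ["design"]),
    "test":   (2, ["code"]),
    "docs":   (2, ["design"]),
}

@lru_cache(maxsize=None)
def finish(task):
    """Earliest finish time: own duration plus the latest prerequisite finish."""
    duration, deps = tasks[task]
    return duration + max((finish(d) for d in deps), default=0)

critical_length = max(finish(t) for t in tasks)
print(critical_length)  # 10: the chain design -> code -> test
```

Real project management tools also report slack per task and the path itself, but the longest-chain computation is the core of the critical path method.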
Risk is an event that may or may not occur. Risk management is the process for identifying, assessing, and prioritizing risks of different types which might endanger the assets and earning capacity of a business. Once a risk has been identified, the risk manager will create a plan to minimize or eliminate the impact of negative events. Many project managers recognize that risk management is important because achieving a project’s goal depends on planning, preparation, results, and achieving strategic goals.
Software quality assurance (SQA)
Quality is defined as the sum of the total characteristics of a software entity that bear on its ability to satisfy stated or implied needs. The purpose of software project quality management is to ensure that the project will satisfy the needs for which it was undertaken. Managing the quality of a software project and its development processes means meeting the requirements and satisfying the user’s experience.
A software package must conform to the written requirements of the project’s processes and deliverables. When the project is “fit for use,” the product can be used as it was intended, ensuring the product will satisfy the needs for which it was developed. In the end, the customer will decide whether the quality of the software is acceptable.
Formal Technical Reviews (FTR)
After completing each module, it is good practice for the technical staff to review it. The purpose is to detect quality problems and suggest improvements before they propagate to the next activity. The technical staff focus on the quality of the software from the customer’s viewpoint.
Project and product measures are used to assist the software team in delivering the required software. This helps to ensure the stakeholder’s requirements are met. Since software itself cannot be measured directly, a project is assessed through direct and indirect measures. Examples of direct measures are cost, lines of code, and size of software. An example of an indirect measure is the quality of the software.
Software configuration management (SCM)
A set of activities designed to control change by identifying the parts of the system that are likely to change, establishing relationships among them, and defining mechanisms for managing different versions of the project.
Establishment of baselines.
Reusability measurement defines criteria for product reuse. By backing up each part of the software project, any part can be corrected, or support can be provided later to update or upgrade the software.
Work product preparation and production (models, documents, logs, forms, lists)
Documents covering project planning and the activities to create models, documents, logs, forms, and lists are produced here.
A framework is a standard way to build and deploy applications. The software process framework is the foundation of the complete software engineering process. A generic process framework encompasses five activities, given below one by one:
Communication: Understand stakeholder intent and requirements.
Always prepare for meetings. If the project domain is unfamiliar, research the topic and share the findings with the rest of the team so everyone understands what will be discussed. Appoint a person in charge of each meeting beforehand. Face-to-face meetings work better but are not always an option. Take notes and meeting minutes, document decisions, and distribute them to attendees. Upload the documents to a shared network resource in case you need to review prior decisions and notes. Stay focused throughout the meeting and modularize the discussion: once something is agreed to, move on, and move on even when there is no agreement or clarity (table the item). Listen to the needs of the client and do not start forming answers until they are finished; that way there is less of a chance you will miss an important detail. Use professional body language: do not roll your eyes or shake your head.
Project management software can be used over the lifespan of the project to monitor work on each module, forecast development time, estimate required human and technical resource hours, and calculate a critical path for the project. Over-planning wastes time, and under-planning creates chaos. Work with the stakeholders to understand what is and is not in the scope of the project, and involve them in discussions when appropriate; they will help set priorities and deadlines, which keeps the project on time and on budget. Iterate on the planning; don't try to do everything at once. Break work down into small tasks and have a plan for completing each one. Incorporate risk management throughout each phase of the project: consider what risks are possible as you plan, and be realistic.
Managing the quality of a software project and its development processes means meeting the requirements and satisfying the user's experience. Define how you will achieve quality and accommodate change. Track the progress of each iteration of the project often, and make adjustments each time the plan is reviewed. Don't leave team members out of planning.
Adapt models from another project to this one, making the needed changes. Explain why you are building each model, and build a useful model, not a perfect one. Build models so they can be changed. Do not create models that are not needed; this wastes time and can take the project out of scope. When presenting a model, ask all team members for feedback. Encourage team members to share their feelings about the state of the model: ask them what they like and what they do not like. Using structure charts during the implementation of a system shows users the program modules and the relationships among them. Structure charts consist of rectangles that represent the program modules, with arrows and other symbols that provide additional information. This helps analysts and programmers understand the purpose of each module while designing and testing it, before it is run as part of the entire system.
Programmers can use the principles of software engineering to manage and improve the quality of the finished system. Working from a specific design, a programmer uses the proposed programming language to transform the program logic into a program module that the rest of the group can work on simultaneously. To simplify the integration of system components and reduce code development time, an integrated development environment (IDE) makes programming easier. Each program must be tested to make sure it functions correctly; after that, the programs are tested in groups, and finally the entire system is run.
Before you write the first line of code, make sure you understand the problem to be solved and the design principles. Choose the best programming language for the problem, then select the programming environment. Select data structures that match the design, and keep conditional logic simple and minimal. Follow your site's coding standards for variable names; this helps ensure the code can be easily maintained in the future. Write self-documenting code to help yourself and other developers. Create unit tests to help reduce bugs and problems in the software. Structure nested loops so they can be easily tested. A successful test is one that discovers a bug.
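As a minimal sketch of the unit-testing advice above (the function, its validation rule, and the test names are hypothetical, chosen only for illustration):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to cents; inputs validated up front."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("price must be >= 0 and percent in [0, 100]")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_raises(self):
        # A test that exercises the error path can catch missing validation.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically so the result can be inspected.
suite = unittest.TestLoader().loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests like these are cheap to run on every change, which is what lets them "discover a bug" early rather than after integration.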
Once the application is delivered to the client, see if the client can provide feedback on how it is working. On the basis of the client's feedback, modify the product to supply a better product. To ensure a successful deployment, if possible, use a staged deployment model:
#1 – Development: A local developer's workstation.
#2 – Testing: An integration environment where developers merge changes to test that they work together as a working application. Unit testing will need to be performed on individual programs or modules; this helps identify and remove execution errors that could cause the program to terminate abnormally, as well as logic errors that could have been missed during desk checking. Integration testing will need to occur on two or more programs that depend on each other.
#3 – Staging: This is where tested changes are run against a production-equivalent environment with data to ensure the application will work properly.
#4 – Production: The live production environment.
If a staging environment is not kept up to date, testing can become out of date and incorrect environment factors can cause the promotion of a build or application to fail, forcing a rollback in development.
Quality is defined as the sum of the characteristics of a software entity that bear on its ability to satisfy stated or implied needs. The purpose of software project quality management is to ensure that the project will satisfy the needs for which it was undertaken. Businesses often make quality management a serious discipline and a major component of their IT risk-reduction strategy.
A software package must conform to the written requirements of the project's processes and deliverables. A project is "fit for use" when the product can be used as it was intended, ensuring the product satisfies the needs for which it was developed. In the end, the customer decides whether the quality of the software is acceptable. Design of experiments is a technique that helps identify which variables have the most influence on the overall outcome of a process; understanding which variables affect the final outcome is an important part of quality planning.
Many scope aspects of IT projects affect the quality of a project. The biggest is the system's functionality: the degree to which a system performs its intended function. A system must contain features with characteristics that appeal to the intended user. The outputs a system generates are shown on screens and in reports, so it is important to define clearly what the screens and reports for a system will look like. A system's performance addresses how well the software package performs for the customer's intended use; to design a system with high-quality performance, project stakeholders must address many issues. Reliability is the ability of a product or service to perform as expected under normal conditions. Lastly, maintainability addresses the ease of performing maintenance on a product. If a software system breaks down after being implemented, the project team must review the project and fix the problem to give the user a satisfying experience in the future.
Planning quality management is the ability to anticipate situations and prepare actions that bring about the desired outcome. Having a plan for the software development life span helps prevent many of the issues that come with developing a software system. The first step is to define quality so that it matches the requirements of the business and achieves a satisfying user experience. The second step is to broadcast a simple quality metric; this helps reduce defects, keeps quality on the minds of the whole development team, and exposes when efforts fall short. Another possible step is to keep fine-tuning the project's goals to make sure the requirements are being met and each user test of the system is a satisfying experience. Getting the requirements right in the first design phase means the software needs less rework, and less retesting and troubleshooting for bugs. Designing a simple application lessens the risk of bugs and makes the system easier to test and rework if needed.
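One commonly broadcast quality metric is defect density: defects found per thousand lines of code. This is a sketch only; the choice of metric and the iteration data below are illustrative assumptions, not from the text:

```python
def defect_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC): an easy metric to publish each iteration."""
    if kloc <= 0:
        raise ValueError("kloc must be positive")
    return defects_found / kloc

# Hypothetical per-iteration data: (iteration name, defects found, size in KLOC).
iterations = [("it-1", 12, 4.0), ("it-2", 9, 5.0), ("it-3", 4, 5.5)]

# Publishing the trend each iteration "keeps quality on the minds of the team"
# and exposes when efforts fall short (a rising number).
for name, defects, kloc in iterations:
    print(f"{name}: {defect_density(defects, kloc):.2f} defects/KLOC")
```

A falling trend across iterations is the signal the team wants to see; the absolute threshold for "good" varies by project and is something the stakeholders would agree on.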
What is ransomware? By now, most of us know that it ruins your files – mostly pictures, documents, and other personally created items – by encrypting them and demanding a "ransom" to decrypt them. Ransomware is a vicious kind of malware that locks users out of their devices or blocks access to files until a sum of money, or ransom, is paid. Attacks cause downtime, data loss, major financial loss, and intellectual property theft. If an attack makes it through a corporation's defenses to even one computer, it can affect the entire network by encrypting and locking files on the network shared drives used by all employees, potentially spreading to all computers. Ransomware can enter the network via an email, computer, phone, tablet, or USB drive. A file that sits on an infected computer or USB drive and is transferred to another computer can also cause infection and propagation. Social media sites can also be used to transfer malware via links and file attachments.
95% of security breaches are due to human error and a lack of employee training and knowledge.
What can you do?
– Be suspicious of every email you receive. Treat each as a potential threat especially if there are attachments and links included
– All of your computers, including Macs, should have a reputable antivirus service
– Don’t share thumb drives on computers you don’t know or if you are not sure they are “clean”
– Turn on the Windows firewall on your PC
– Backup your data and secure those backups.
– Be wary of what you are downloading on your computer
Quote to Ponder: “Live in the sunshine, swim in the sea, drink the wild air…” Ralph Waldo Emerson
Cybersecurity, or computer security, is the protection of computer systems from theft of or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide. The field is becoming more important due to increased reliance on computer systems, the Internet, and wireless networks, and the growth of "smart" devices.
Vulnerabilities and attacks
A vulnerability is a weakness in design, implementation, operation, or internal control. Most of the vulnerabilities that have been discovered are documented in the Common Vulnerabilities and Exposures (CVE) database. An exploitable vulnerability is one for which at least one working attack, or "exploit," exists. Vulnerabilities are often hunted or exploited with the aid of automated tools, or manually using customized scripts. To secure a computer system, it is important to understand the attacks that can be made against it. Some of the benefits of cybersecurity are:
Business protection against malware, ransomware, phishing, and social engineering
Protection for data and networks
Prevention of unauthorized users
Improved recovery time after a breach
Protection for end users
Improved confidence in the product for both developers and customers
Types of Cybersecurity Threats
The process of keeping up with new technologies, security trends, and threat intelligence is a challenging and ongoing task.
#1 – Ransomware: A type of malware in which an attacker locks the victim's computer system files, typically through encryption, and demands a payment to decrypt and unlock them. Paying the ransom does not guarantee that the files will be recovered or the system restored.
#2 – Malware: Any file or program used to harm a computer user or gain unauthorized access.
#3 – Social Engineering: An attack that relies on human interaction to trick users into breaking security procedures in order to gain information that is typically protected.
#4 – Phishing: A form of fraud where fraudulent emails are sent that resemble emails from reputable sources; however, the intention of these emails is to steal sensitive data. This is the most common type of cyber-attack.
Tenets of Information System Security
There are three protections that must be extended over information: confidentiality, integrity, and availability (CIA).
#1 – Confidentiality (Data Privacy)
It is important that only approved individuals can access important information, protecting it from everyone except those with the right to access it. Implement security controls that help reduce the risk of data leaks by defining a set of rules that limits access so only authorized users can view information.
Put an information technology security policy framework in place that outlines and identifies where security controls should be used. Protecting private data is the process of ensuring data confidentiality, and organizations must use proper security to address this concern. Adopt a data classification standard that defines how to treat data throughout an IT infrastructure.
Private data of individuals
Intellectual property of businesses
Keep data private
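A minimal sketch of "only authorized users can view information": a rule table mapping classification levels to the roles allowed to read them. The roles, classification levels, and policy below are assumptions for illustration, not a standard:

```python
# Assumed policy: each data classification maps to the roles allowed to read it.
READ_POLICY = {
    "public": {"employee", "manager", "auditor"},
    "internal": {"employee", "manager", "auditor"},
    "confidential": {"manager", "auditor"},
}

def can_read(role: str, classification: str) -> bool:
    """Return True only if the role is authorized for that data classification.

    An unknown classification grants access to no one (fail closed).
    """
    return role in READ_POLICY.get(classification, set())
```

In a real system the policy would live in the security framework mentioned above, not in code, but the fail-closed default (deny when the classification is unknown) is the important design choice.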
#2 – Integrity (Validity and accuracy of data)
Integrity ensures that the information is correct and that no unauthorized person or malicious software has altered the data. Data lacking integrity (inaccurate or invalid) are of no use to users and organizations.
Only authorized users can change information
Assurance that the information is trustworthy and accurate
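Integrity is commonly checked with a cryptographic hash: store a digest alongside the data, and if a recomputed digest no longer matches, the data has been altered. A standard-library sketch (the record contents are hypothetical):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of the data; any change to the bytes changes the digest."""
    return hashlib.sha256(data).hexdigest()

record = b"balance=100.00"
stored = digest(record)   # saved alongside the record when it was written

tampered = b"balance=999.00"
print(digest(record) == stored)    # True: data unchanged
print(digest(tampered) == stored)  # False: integrity violated
```

A plain hash detects accidental or unauthorized change but can be recomputed by an attacker who can also replace the stored digest; keyed variants (e.g. HMAC) address that case.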
#3- Availability (Data is accessible)
Information is accessible by authorized users whenever they request it, and it has value only if the authorized parties who are assured of its integrity can access it. At the same time, information cannot be locked down so tightly that no one can access it.
Coupling is the degree of interdependence between software modules: how closely connected two modules are. Modules should be as independent as possible from other modules, so that changes to one module do not heavily impact the others. A higher-coupling relationship between modules means they depend on the inner workings of other modules; this makes changes harder to coordinate and increases the probability of functionality errors. Low coupling is often a sign of a well-structured system design: changes can be made to the internals of a module without impacting other modules in the system, and code is easier to write, test, and maintain since modules are not interdependent.
Types of Coupling:
Data Coupling – the components are independent of each other and communicate through data.
Stamp Coupling – the complete data structure is passed from one module to another module.
Control Coupling – if modules communicate by passing control information, they are said to be control coupled.
External Coupling – the modules depend on an externally imposed data format, communication protocol, or device interface.
Common Coupling – the modules have shared data such as global data structures.
Content Coupling – one module can modify the data of another module or control flow is passed from one module to the other module.
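To make the contrast concrete, here is a sketch (the tax example and names are assumed for illustration) of data coupling versus common coupling:

```python
# Data coupling: compute_tax depends only on the values passed in,
# so it can be changed, tested, and reused in isolation.
def compute_tax(amount: float, rate: float) -> float:
    return amount * rate

# Common coupling: both functions depend on shared global state,
# so changing CONFIG anywhere silently affects every caller.
CONFIG = {"rate": 0.07}

def compute_tax_common(amount: float) -> float:
    return amount * CONFIG["rate"]

def update_rate(rate: float) -> None:
    CONFIG["rate"] = rate
```

The data-coupled version is the "low coupling" style the paragraph above recommends: its behavior is fully determined by its parameters, while the common-coupled version can break when an unrelated module calls `update_rate`.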
Cohesion refers to the degree to which the elements of a module belong together: everything in a module is meant to work together, and the elements within the module are directly related to the functionality the module is meant to provide. With high cohesion, developers can easily design, write, and test code, since the code for a module is all located together and works together. Low cohesion would mean that the code that makes up some functionality is spread out all over the code base.
Types of Cohesion:
Functional Cohesion – every essential element for a single computation is contained in the component.
Sequential Cohesion – one element outputs data that becomes the input for another element
Communicational Cohesion – two elements operate on the same input data or contribute towards the same output data
Procedural Cohesion – elements of procedural cohesion ensure the order of execution.
Temporal Cohesion – elements are related by the timing involved.
Logical Cohesion – elements are logically related
Coincidental Cohesion – the elements are not related
A class to open a bank account would not try to handle both savings-account and checking-account opening. High cohesion says those go into different classes, while low coupling says the modules should not depend on each other.
If a higher-cohesion design were chosen, implementing one class for checking and another for savings, the bank-account module would be easier to maintain. Adding future features to one or both classes would also be easier to accomplish and less prone to software logic errors.
In a low-cohesion design, by contrast, all states and behaviors of every account type would have to be implemented in a single module. This would create a very lengthy module that is harder for future developers to maintain.
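The bank-account discussion above can be sketched as two small, cohesive, loosely coupled classes (the class names, methods, and interest rate are hypothetical):

```python
class CheckingAccount:
    """Handles only checking-account state and behavior (high cohesion)."""

    def __init__(self, balance: float = 0.0):
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class SavingsAccount:
    """Handles only savings-account state and behavior; knows nothing about checking."""

    def __init__(self, balance: float = 0.0, rate: float = 0.02):
        self.balance = balance
        self.rate = rate

    def add_interest(self) -> None:
        self.balance += self.balance * self.rate
```

Because neither class depends on the other's internals (low coupling), adding a feature to `SavingsAccount`, such as a new interest rule, cannot break `CheckingAccount`, which is exactly the maintainability benefit described above.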