Internet of Things (IoT) Security


There is no single, definitive explanation of what exactly the Internet of Things (IoT) is. Most professionals define the term as the connection of diverse devices that can provide or request a service over the Internet, enabling human-to-thing, thing-to-thing, and thing-to-things transmission of data. IoT applications are improving everyday life in many ways. Vehicles are now being equipped with small IoT devices that enable them to download roadmaps with updated traffic information and provide protection against auto theft. Buildings are having IoT devices installed with sensors that allow users to remotely control a building's energy consumption across systems such as lights and air conditioners based on preferences. Many household items are now sold with their own embedded processing units, giving the products IoT capabilities.

The concept of what an IoT system is composed of has caught the attention of many people in academia and industry. The IoT reference models used to explain the different sections within an IoT system range from three to seven levels. The first reference model for IoT systems consisted of three levels and described IoT as a system of Wireless Sensor Networks (WSNs):

  1. Application
  2. Cloud server
  3. WSN

The second proposed model has five levels and reduces the complexity of interactions between different sections of the model, resulting in simpler applications with well-defined components. The current model, created by Cisco in 2014, extends the previous models to seven levels, where the flow of data has a dominant direction depending on the type of application. The first three levels of the model are grouped into the edge-side layer.

  • Level 1 consists of edge computing nodes and devices such as smart controllers, sensors, and RFID readers.
  • Level 2 consists of the many communication components that enable the transmission of data or commands.
  • Level 3 is the edge (or fog) computing level. This is where simple data processing starts, reducing the computation load in the upper levels and producing a faster response.

The next three levels are grouped into the server- or cloud-side layer.

  • Level 4 reduces the amount of data by converting data in motion to data at rest, filtering and selectively storing network packets into database tables.
  • At Level 5, the information becomes abstracted, providing the ability to render and store data so that data processing becomes simpler and more efficient.
  • At Level 6, the information can be interpreted in applications for marketing, academic, and industrial needs.

The final group contains only Level 7, where users interact with applications built from IoT node data.
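The filtering performed at the edge (Level 3) can be sketched in a few lines of Python. This is a hypothetical illustration: the threshold value and the reading format are assumptions, not details from the source.

```python
# Hypothetical sketch of Level 3 edge filtering: only readings that
# cross a threshold are forwarded to the cloud-side levels, reducing
# the upstream computation load described in the reference model.
# The threshold and reading format are illustrative assumptions.

def edge_filter(readings, threshold=30.0):
    """Return only the readings worth sending upstream."""
    return [r for r in readings if r["value"] > threshold]

raw = [
    {"sensor": "temp-1", "value": 21.5},
    {"sensor": "temp-2", "value": 34.2},  # above threshold
    {"sensor": "temp-3", "value": 19.9},
]
upstream = edge_filter(raw)
# Only temp-2 is forwarded; the other readings are handled locally.
```

Filtering this early is what produces the faster response the model attributes to the edge level: most data never has to cross the network at all.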



The motivations of attackers who launch attacks against IoT devices and systems might include stealing sensitive data or compromising IoT components. The vulnerabilities of IoT devices at the first level start with hardware Trojans. These are a major concern for IoT integrated circuits, since an attacker can use the circuit to exploit a node's functionality and gain access to data or software running on the integrated circuit. A Trojan might be triggered in one of two ways:

  • Externally activated, by an antenna or sensor
  • Internally activated, once a certain condition is met within the integrated circuit

Non-network side-channel attacks on an edge node may reveal critical information during normal operation, even when the node is not currently using any wireless communication to send or receive data. Lastly, denial-of-service (DoS) attacks can be launched against IoT devices; the three main types are battery draining, sleep deprivation, and outage attacks.

  • In a battery-draining DoS attack, an attacker sends many packets to a node, forcing it to run various system checks repeatedly. Since nodes tend to be very small and carry small batteries with limited energy capacity, the repeated processing can quickly exhaust a node's power.
  • In a sleep deprivation attack, an attacker sends a chain of requests to a node that appear to be legitimate. Since most IoT nodes are battery-powered with limited energy capacity, keeping the node awake prevents it from entering its power-saving sleep cycle and drains the battery.
  • When an outage attack occurs, an edge node stops performing its normal operations. However, an outage may also be the result of an unintended error or a system issue.
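The battery-draining and sleep-deprivation attacks above both exploit a node's willingness to process every incoming request. A common mitigation is per-source rate limiting; below is a minimal token-bucket sketch in Python. The bucket size and refill rate are illustrative assumptions, not values from the source.

```python
import time

# A minimal sketch of a rate limiter an edge node might use to blunt
# battery-draining and sleep-deprivation attacks: requests beyond the
# budget are dropped before they trigger expensive system checks.
# Capacity and refill rate are illustrative assumptions.

class TokenBucket:
    def __init__(self, capacity=5, refill_per_sec=1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop: conserve the battery instead of processing

bucket = TokenBucket(capacity=5)
results = [bucket.allow() for _ in range(10)]  # burst of 10 requests
# Roughly the first 5 are served; the rest of the burst is dropped.
```

Dropping excess requests cheaply, before any cryptographic or system checks run, is what keeps the attack from translating into energy drain.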

Implementing RFID tags in IoT devices at the edge-node level requires each tag to provide a unique identifier that any nearby RFID reader can read. A tag attached to a product or an individual thereby creates tracking information. Certain types of tags carry information about the product or individual they are attached to, making a node easily inventoried by a third party. Electronic product code (EPC) tags contain two fields that create privacy concerns for users: the manufacturer code and the product code.
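The privacy issue can be illustrated with a short sketch. The field widths below are simplified assumptions for illustration only; real EPC encodings such as SGTIN-96 differ in their exact layout.

```python
# A simplified, hypothetical sketch of why EPC-style tags raise privacy
# concerns: the manufacturer and product codes are readable by any
# nearby reader. The field widths here are illustrative assumptions,
# not the actual EPC specification.

def parse_tag(tag_hex):
    """Split a 96-bit hex tag into hypothetical fixed-width fields."""
    bits = bin(int(tag_hex, 16))[2:].zfill(96)
    return {
        "header":       int(bits[0:8], 2),
        "manufacturer": int(bits[8:40], 2),   # identifies the company
        "product":      int(bits[40:64], 2),  # identifies the item type
        "serial":       int(bits[64:96], 2),  # unique per item -> trackable
    }

tag = parse_tag("30000000010000020000000f")
# The same manufacturer/product pair on every unit, plus a unique
# serial, lets a third party inventory what a person is carrying.
```

Because the fields are readable without authentication, anyone with a reader in range can recover them, which is exactly the third-party inventorying concern described above.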

Within the scope of attacks at the communication level of the reference model, one technique an attacker might use for reconnaissance is network eavesdropping, or packet sniffing. This occurs when an attacker deliberately listens to private conversations over system communication links. It can provide an attacker with invaluable information when the data is unencrypted or sent in plaintext. The data contained within a network packet might include the following:

  • Usernames & passwords
  • Shared network passwords
  • Node configuration
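A short sketch shows why plaintext traffic is so valuable to an eavesdropper. The packet payload below is fabricated for illustration; any party on the path could recover the same fields from real unencrypted traffic.

```python
import re

# Sketch of credential recovery from a sniffed plaintext payload.
# The payload is fabricated for illustration: it mimics an unencrypted
# HTTP login request as it would appear on the wire.

packet_payload = (
    b"POST /login HTTP/1.1\r\n"
    b"Host: device.local\r\n\r\n"
    b"username=admin&password=hunter2"
)

m = re.search(rb"username=([^&]+)&password=(\S+)", packet_payload)
user = m.group(1).decode()
pw = m.group(2).decode()
# An eavesdropper on the link recovers the credentials directly;
# with TLS, only ciphertext would be visible on the wire.
```

This is why transport encryption matters even for "low-value" IoT traffic: a single sniffed login often unlocks the node's configuration as well.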

A side-channel attack is not easy to implement, but it is a powerful attack against encryption algorithms. This type of attack can be launched from both the edge-node and communication levels. However, side-channel attacks launched from the communication level are not easily defended against, since the method is non-invasive and hard to detect. Another possible attack at this level is the injection of fraudulent packets into communication links, either by inserting new packets into the network or by capturing network packets and manipulating the data they contain.

There are new and emerging challenges to securing IoT systems, such as the dramatic increase in the number of weak links and unexpected uses of data. The increase in weak links results from the special characteristics of the devices and the cost factors faced by device manufacturers: because many are compact, battery-powered devices with limited storage and computation resources, many devices on the market cannot support secure cryptographic protocols. Lastly, the collection of environment- and user-related data by Internet-connected sensors, enabled by pervasive IoT technologies, has led to the unwelcome influence of such sensors in everyday living and created privacy concerns among users.

As more developers push new IoT devices and services to the Internet, new IoT vulnerabilities and attacks against users and systems will be discovered. Most systems are designed for a specific application or service, and testing the security of a system might be complex and time consuming, but it is necessary as the number of new devices deployed to the Internet by manufacturers increases each week. Some security threats might not be as widely recognized as others, but new threats to IoT devices and applications should be addressed by both security professionals and developers to minimize the scope of possible risk to users and devices.



Mosenia, A., & Jha, N. K. (2017). A Comprehensive Study of Security of Internet-of-Things. IEEE Transactions on Emerging Topics in Computing, 586-602.



What is the Internet of Things (IoT)?

I. Introduction

The Internet of Things (IoT) is a system of interrelated computing devices and mechanical and digital machines equipped with unique identifiers (UIDs) that can collect, share, and analyze data over a network without requiring human-to-human or human-to-computer interaction. It is an interconnection of heterogeneous entities, where the term "entity" refers to a human, a sensor, or anything that may request or provide a service. As more wireless networks come online, the growing number of IoT devices around the world will only expand the scope of IoT devices and applications. Vendors are now leveraging IPv6 addressing schemes with high-speed Internet connections to improve the design and performance of IoT devices, creating increased growth and demand for new IoT products.

IoT is playing a key role in transforming everyday life, with greater connectivity and functionality generating data faster than most applications can process and filter it. By combining these connected devices with automated systems, it is possible to gather information, analyze it, and create an action or event to help someone with a task or learn from a process. However, many IoT devices have operational limits on the computational power available to them. These constraints often make them unable to implement basic security measures, and the low price and consumer focus of many devices make a robust security patching system uncommon.

The scope of IoT applications has opened the door to many new business opportunities and revenue streams. Many businesses implement IoT services to gain a better view of operational expenses, creating better marketing insight based on consumer behavior and product placement. This can reduce the total time it takes for a product to become available to a consumer. IoT also offers businesses just-in-time training for employees, improving labor efficiency and increasing organizational productivity. Logistics and supply chains are improved with IoT by assigning a unique identifier to individual items, enabling intelligent choices about how to deliver goods and services more efficiently to consumers. IoT helps manufacturing companies measure a product's performance, diagnose errors, and improve a product's quality, performance, and support.

II. IoT reference model

The initial proposed IoT reference model consists of three levels and represents IoT as an extended version of wireless sensor networks (WSN).

Level	Description
3	Applications
2	Cloud Services
1	WSN

In 2014, a new IoT reference model was created by Cisco; it consists of the following seven levels, with data generally flowing in a bidirectional manner.

Level Description Layer Abstraction
7 Collaboration and processes (People & Business Processes) User-side
6 Application (Reporting, Analytics, Control) Server/Cloud-side
5 Data Abstraction (Aggregation & Access)
4 Data Accumulation (Storage)
3 Edge Computing (Data Analysis & Transformation) Edge-side
2 Connectivity (Communication & Processing Units)
1 Physical Devices & Controllers (Devices)


Level 1 – This level is concerned with physical devices at the edge side; it contains physical devices such as smart controllers, sensors, and RFID readers. Data confidentiality and integrity are considered from here upward.

Level 2 – This level contains all communication and processing units that enable the transmission of data or commands using routing and switching protocols. Communication happens between IoT devices in the first level and components in the second level, including communication across data networks.

Level 3 – Edge computing: simple data processing is initiated here and is essential for reducing computation loads in the higher levels as well as providing fast responses to events. Learning algorithms may be implemented at this level.

Level 4 – Data accumulation: data is combined from multiple sources, enabling the conversion of data in motion to data at rest. At this level, data is converted from network packets into formats such as database tables, then filtered and selectively stored, so that data of interest can be retained for future analysis or shared with higher-level computing servers.
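The motion-to-rest conversion at Level 4 can be sketched with a few lines of Python using the standard library's sqlite3 module. The table name, field names, and threshold are illustrative assumptions.

```python
import sqlite3

# A minimal sketch of Level 4 data accumulation: readings arriving
# "in motion" are filtered and selectively stored "at rest" in a
# database table for the higher levels to query. The schema and
# threshold are illustrative assumptions.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")

incoming = [("temp-1", 21.5), ("temp-2", 34.2), ("temp-3", 19.9)]
interesting = [(s, v) for s, v in incoming if v > 30.0]  # selective storing
conn.executemany("INSERT INTO readings VALUES (?, ?)", interesting)
conn.commit()

rows = conn.execute("SELECT sensor, value FROM readings").fetchall()
# Only the filtered readings are persisted for Levels 5-7 to use.
```

The key point the model makes is visible here: what reaches the database is already a curated subset, not the raw packet stream.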

Level 5 – Data abstraction: this level provides the opportunity to read and store data such that further processing becomes simpler or more efficient. Services at this level may include data normalization/denormalization, followed by indexing and consolidating data into one place with access to multiple data stores.

Level 6 – Application: information interpretation, where application software cooperates with the data accumulation and data abstraction levels.

Level 7 – This level involves users and business processes using IoT applications and their analytical data to make informed choices.


III. Fog and Edge Computing in IoT

IoT vendors are implementing edge and fog computing technology to provide enhanced data analysis and management and to increase the scope of possible IoT applications. In computer networking, the control plane is the part of the router architecture concerned with the network topology, or the information in the routing table that defines what to do with incoming packets; it is also the part of the software that configures and shuts down the data plane. The data plane is the part of the software that processes the data requests themselves. Fog computing is a standard that defines how edge computing should work; it facilitates the operation of computation, storage, and networking services between IoT devices and cloud computing centers. This enables computing services to reside at the edge of the network as opposed to servers in a data center. In fog computing, a centralized computing device is responsible for processing data from different endpoints in the network. This style of architecture uses edge devices to carry out a substantial amount of computation, storage, and communication locally before sending data over the Internet backbone. Fog computing brings data closer to the user, as compared to storing data far from the endpoint in data centers, providing location awareness and low latency and improving the overall quality of service.

Edge computing is located at the edge of the network: IoT data is collected and analyzed directly by controllers or sensors, then transmitted to a nearby computing device for analysis. This brings processing closer to the data source, so data does not need to be sent to a remote cloud or other centralized system for processing. Eliminating that distance and transit time improves the speed and performance of data transport, as well as of the devices and applications at the edge. Instead of depending entirely on a cluster of clouds for computing and data storage, edge computing can provide intelligent services by leveraging local computing on local edge devices. Edge computing applications can pre-process, filter, score, and aggregate data.

Edge Computing	Fog Computing
Pushes communication capabilities, processing power, and intelligence directly into devices such as programmable automation controllers.	Pushes intelligence to the local area network and processes data in an IoT gateway or a fog node.
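As a sketch of the pre-processing and aggregation mentioned above, the example below collapses a window of readings so that only per-sensor averages leave the edge. The field names and values are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean

# Sketch of edge-side aggregation: instead of shipping every sample to
# a central cloud, the edge node summarizes a window of readings and
# transmits only the per-sensor average. Names/values are assumptions.

def aggregate(samples):
    by_sensor = defaultdict(list)
    for sensor, value in samples:
        by_sensor[sensor].append(value)
    return {sensor: mean(values) for sensor, values in by_sensor.items()}

window = [("temp-1", 20.0), ("temp-1", 22.0), ("hum-1", 40.0), ("hum-1", 50.0)]
summary = aggregate(window)
# Four samples collapse to two numbers before leaving the edge.
```

This is the bandwidth and latency win both edge and fog architectures aim for: the cloud sees summaries, not raw streams.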


IV. The Vulnerabilities of IoT

Security is a significant challenge for companies adopting and deploying IoT innovations. There is little motivation for vendors to change, with little or no consequence for selling insecure devices, since devices can be manufactured very cheaply and are not maintained with regular patches and updates. An example of a major security concern for integrated circuits is the hardware trojan: a malicious modification of an integrated circuit (IC) that enables an attacker to use the circuit, or exploit its functionality, to obtain access to data or software running on the integrated circuitry. Trojans may be:

  • Externally Activated (Antenna or sensor)
  • Internally Activated (Given Condition; Logic)

IoT systems are at higher security risk for several other reasons, including insecure network interfaces or services and insufficient authentication/authorization. These systems might include data or services that were never designed to be connected to the global Internet, may lack a well-defined perimeter, and are continuously changing due to device and user mobility.

IoT systems are highly diverse in their communication media and protocols, platforms, and devices. As a result, IoT systems, or portions of them, could be physically unprotected and/or controlled by different parties. IoT devices can also be autonomous entities that control other IoT devices. Routing attacks against an IoT network affect how packets are routed by spoofing, redirecting, or misdirecting them to another network. An attacker can inject fraudulent packets into communication links using three different methods: insertion, manipulation, or replication.

There are several communication vulnerabilities in IoT devices, sometimes resulting from a lack of transport encryption or integrity verification; this can allow a third party to intentionally listen to private conversations over the communication lines. Related privacy concerns with IoT devices include insufficient security configurability, insecure software/firmware, and poor physical security.

A DoS attack is a standard attack against IoT devices that jams the transmission of radio signals, either through continuous jamming that blocks all transmissions or through intermittent jamming that degrades system performance. There are three well-known types of DoS attacks against edge computing nodes: battery draining, sleep deprivation, and outage attacks. In a battery-draining attack, the victim nodes typically carry small batteries with very limited energy capacity, which the attacker works to exhaust. In a sleep deprivation attack, the victim is a battery-powered node with limited energy capacity, and the attacker sends an undesired set of requests that seem to be legitimate, preventing the node from sleeping. Lastly, an outage attack occurs when an edge device stops performing its normal operations.

 V. Botnets and Internet of Things


A botnet is a robot network of compromised machines, or bots (zombies), that run malicious software under the command-and-control infrastructure of a bot master and are used for various malicious activities. Bots can automatically scan entire network ranges and propagate themselves onto other machines using known vulnerabilities and weak passwords. Once a machine is compromised, a small program is installed for future activation by the bot master, who at a certain time can instruct the bots in the network to execute actions. Botnet architecture has evolved over time in an effort to evade detection and disruption. Bot programs are constructed as clients that communicate via existing servers, allowing the bot master to perform all control from a remote location and obfuscate the traffic within a client-server or peer-to-peer network.

Once the software is downloaded, the bot contacts its master computer to signal that everything is ready to go. An individual device can be compromised by several types of attack, often at the same time. Servers may choose to publish rules on the behavior of internet bots; these inform a web robot about which areas of the website should not be processed or scanned.

The text file robots.txt is normally placed at the root of a webserver to govern a bot's behavior on that server; it can also be used by search engines to categorize websites. Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the website. If this file does not exist, web robots assume that the website owner does not wish to place any limitations on crawling the site.
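Python's standard library includes a parser for such files, which shows how a well-behaved bot consults the rules before crawling. The rules and URLs below are a fabricated example.

```python
from urllib.robotparser import RobotFileParser

# Sketch of how a compliant bot consults robots.txt before fetching
# pages. The rules and URLs below are fabricated for illustration.

rules = """\
User-agent: *
Disallow: /admin/
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

allowed = rp.can_fetch("MyBot", "http://example.com/index.html")
blocked = rp.can_fetch("MyBot", "http://example.com/admin/panel")
# A compliant bot skips /admin/ but may crawl the rest of the site.
```

Note that robots.txt is purely advisory: malicious bots, including botnet scanners, simply ignore it.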

Botnets can be used to perform Distributed Denial of Service (DDoS) attacks, steal data, send spam, allow an attacker to access a device and its connection, or mine cryptocurrency. A DDoS attack is a malicious attempt to make a server or a network resource unavailable to users, achieved by saturating a service, which results in its temporary suspension or interruption. The goal of the attack is to overwhelm a target application with an extreme number of requests per second (RPS), driving high CPU and memory usage. A single machine may either target a software vulnerability or flood a targeted resource with packets, requests, and queries. Application-layer DDoS attacks occur through HTTP floods, slow attacks, or zero-day assaults.


Network layer DDoS Attacks
UDP Floods	Gigabits per second (Gbps)
SYN Floods	Packets per second (PPS)
NTP Amplification	Consumes the target's upstream bandwidth
DNS Amplification	


VI. The Mirai Botnet

On October 21, 2016, a massive DDoS attack left much of the internet inaccessible on the United States East Coast. Mirai was the first of a novel category of botnets that exploit IoT devices and systems, turning IoT devices running a Linux operating system into remotely controlled bots that can be used as part of a botnet in large-scale network attacks. It primarily targets online consumer devices such as IP cameras and home routers. Mirai has two core purposes: to locate and compromise IoT devices to further grow the botnet, and to launch DDoS attacks based on instructions received from a remote command and control. Mirai performs wide-ranging scans of IP addresses, continuously scanning the internet for the IP addresses of IoT devices. However, there is a hardcoded list of IP address ranges that Mirai bots are programmed not to infect during scans. These addresses belong to the US Postal Service, the Department of Defense, the Internet Assigned Numbers Authority (IANA), Hewlett-Packard, and General Electric.


Mirai identifies under-secured IoT devices that can be remotely accessed by working through a table of common factory-default usernames and passwords, logging into them, and infecting them with the Mirai malware. Its attack functions enable HTTP floods and OSI layer 3-4 DDoS attacks. When launching HTTP floods, Mirai bots hide behind default user agents. Infected devices continue to function normally, except for occasional sluggishness and increased bandwidth use.

If an IoT device becomes infected with Mirai, an administrator should immediately disconnect the device from the network, then reboot it. Since the Mirai malware exists only in dynamic memory, rebooting the device clears the malware. Afterwards, ensure that the previous password for accessing the device is changed to a strong password. If the device is reconnected before the password is changed, it could quickly be reinfected with the Mirai malware.


VII. Countermeasures/ Protection Techniques of IoT Devices

The following are basic protection techniques suggested by the Cybersecurity and Infrastructure Security Agency (CISA) that provide baseline IoT security protection against a third party or hostile attacker. Since many IoT devices do not have powerful processors or enough memory to run intrusion detection themselves, that analysis will likely occur at a gateway device.

An IoT device owner should stop using default/generic passwords and disable all remote (WAN) access to the device. Ensure that all default passwords on IoT devices have been changed, and update devices with security patches from the manufacturer when available. Even if a device has known software vulnerabilities, patches or workarounds might not be available for a very long period; this makes intrusion-detection techniques all the more important.

Device administrators should disable Universal Plug and Play (UPnP) on routers unless it is necessary. Lastly, a network administrator should monitor port 48101 for suspicious traffic, as infected devices often attempt to spread malware by using this port to send results to a third party or threat actor. Administrators should also monitor TCP ports 23 and 2323 for third-party attempts to gain unauthorized control over IoT devices via the network terminal (Telnet).

Service Port
SSH 22
Telnet 23
IP 2323
IP 48101
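As a sketch of such monitoring, the snippet below flags flows that touch the ports named above. A real deployment would feed this from a flow exporter or IDS; the flow records here are fabricated for illustration.

```python
# Sketch of flagging flows on the ports highlighted for Mirai-related
# monitoring (Telnet 23/2323, result reporting 48101). The flow
# records below are fabricated for illustration.

SUSPICIOUS_PORTS = {
    23: "Telnet control attempt",
    2323: "Alternate Telnet control attempt",
    48101: "Possible Mirai result reporting",
}

def flag_flows(flows):
    """Return (src, dst_port, reason) for flows touching watched ports."""
    return [(src, port, SUSPICIOUS_PORTS[port])
            for src, port in flows if port in SUSPICIOUS_PORTS]

flows = [("10.0.0.5", 443), ("10.0.0.9", 2323), ("10.0.0.9", 48101)]
alerts = flag_flows(flows)
# Two of the three flows warrant investigation.
```

A host repeatedly hitting both 2323 and 48101, as in this fabricated example, is the traffic pattern the CISA guidance says to watch for.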


VIII. Conclusion

As more developers and vendors push new IoT devices and services to the Internet, new IoT threats and attacks against users and systems, aimed at controlling a system or stealing data, will be discovered. Most IoT systems are designed for a specific application or service, and testing the security of a system might be complex and time consuming, but it is necessary as the number of new devices deployed to the Internet increases each week. Some security threats might not be as widely recognized or known as others, but security professionals should raise awareness of new threats to IoT devices and applications and make that information publicly available to developers and administrators to minimize the scope of possible risk to users and devices.



Bertino, E., & Islam, N. (2017, February). Botnets and Internet of Things Security. Computer, 76-79.

Burgess, M. (2018, February 16). What is the Internet of Things? WIRED explains. Retrieved from WIRED:

Cisco. (2014). The Internet of Things Reference Model. Cisco.

Cloudflare. (2020). What is a DDoS Attack? Retrieved from Cloudflare:

Cybersecurity and Infrastructure Security Agency. (2017, October 17). Alert (TA16-288A) – Heightened DDoS Threat Posed by Mirai and Other Botnets. Retrieved from Cybersecurity and Infrastructure Security Agency:

Fruhlinger, J. (2018, March 9). The Mirai botnet explained: How teen scammers and CCTV cameras almost brought down the internet. Retrieved from CSO:

Herzberg, B., Zeifman, I., & Bekerman, D. (2016, October 26). Breaking Down Mirai: An IoT DDoS Botnet Analysis. Retrieved from Imperva:

Mosenia, A., & Jha, N. K. (2017). A Comprehensive Study of Security of Internet-of-Things. IEEE Transactions on Emerging Topics in Computing, 586-602.

Norton. (2020). What is a distributed denial of service attack (DDoS) and what can you do about them? Retrieved from Norton:

Shah, H. (n.d.). Edge computing and Fog computing for enterprise IoT. Retrieved from SIMFORM:



The American Civil War

War is an extremely serious event that occurs when an issue cannot be resolved in peace or compromise. Slavery was the issue of the mid-nineteenth century in America. The agrarian South wanted slavery maintained, and even expanded. The industrious North did not, promoting personal liberties and opportunity. Tension grew over the issue of slavery as America spread throughout the west. Ironically, the nation began breaking apart as one-by-one, southern states decided to secede into their own confederation, all united in slavery.

The newly elected President Abraham Lincoln worked diligently with Congress on possible scenarios to intervene or to allow the institution of slavery to continue. Slavery had sustained a unique way of life in the Cotton States. It brought prosperity to their citizens, and many believed in their right to uphold slavery under the Constitution. Unfortunately, Lincoln understood the Constitution all too well. The carefully written manuscript did not address slavery. Like a spreading disease, Abraham Lincoln believed, it should not be encouraged in a young nation based upon freedom. As a competent leader, President Lincoln recognized diverse interpretations of the Constitution. However, in light of ongoing rebellion and secession, and for the sake of the nation's integrity, he felt it necessary to resolve the issue.

Abraham Lincoln, 16th president of the United States

Winning a war takes strength, strategy, a suitable battleground, and a firm conviction for success. Neither side expected war, nor wanted it. But the majority of Southerners thought it would be a quick victory after they easily captured the ill-equipped, federally occupied Fort Sumter off the South Carolina coast on April 12, 1861. With these first shots of the American Civil War, Lincoln concluded it would take more time, more resources, and more manpower to secure victory and unite the nation once again.

Our nation may not have been prepared to go to war, although to some it seemed a foregone conclusion. Yet the North was already in position to win the war. Essentially, the federal government had the money and resources to outfit and supply a successful war campaign. The Northern states were an industrialized culture with various types of mills and factories. The government maintained arsenals, such as the one at Liberty, Missouri, and was also equipped to mass produce more guns and ammunition. In contrast, the Confederate South was primarily a society of farmers whose available tools and machinery supported an agrarian economy. There was but one manufacturer capable of producing heavy armaments, located in the state of Virginia. The South could import weaponry from overseas unless blocked by the Union navy. Lack of munitions prompted desperation and creativity: many volunteers supplied their own guns, while others converted weapons from farm implements. Moreover, countless weapons were salvaged through the capture or conquest of Union supplies.

The Civil War was fought in Southern terrain and along the extensive Atlantic coastline. The North already had access to over 300 vessels of various sizes and capabilities, naval shipyards, and the means to build more and repair as needed. The coastal region was difficult for the Confederates to defend, as they owned few warships and had limited access to more. While they did import large ships from Britain, they again resorted to converting and outfitting available vessels, including tugboats and cutters, for immediate battle. In fact, the Confederate Secretary of the Navy, Stephen R. Mallory, is credited with the construction of torpedo boats and a submarine, the C.S.S. Hunley, which damaged or sank vessels belonging to the North. In addition, Jefferson Davis solicited privateers to help capture additional ships for their cause.

American Civil War

The Union easily accessed the battleground via rivers such as the Tennessee and the Cumberland. Rivers and bridges were heavily patrolled with armed steamboats developed out of the Transportation Revolution. The steamboats supplied food and equipment to Northern soldiers. The modern railroad and telegraph were also used by the North. The Alleghenies of West Virginia provided railroad access, a great barrier, and gave the Union a strategic advantage over the Confederates. In addition, macadamized roads were much easier for Union soldiers to travel upon foot, as opposed to muddy gravel over difficult terrain that often wore Southern soldiers down. Without food and provisions, many Confederate soldiers became weak with hunger.

The availability of manpower was one of the most significant resources that brought the North to victory. In total population, the North outnumbered the South by 2 to 1, which was reflected in armed strength. There were career soldiers and volunteers. The North organized recruitment camps. The very first Union regiment came out of the state of Massachusetts. Northern soldiers organized for battle, security, and protection, especially at the rivers, railroads, and the area surrounding Washington D.C. When the South sabotaged telegraph lines, destroyed railroad bridges, or damaged ships, the North could send workers for repair and reconstruction. When Lincoln needed more men, he was able to order a new supply.

Each side could boast skillful leadership, including commanding leaders Ulysses S. Grant and Robert E. Lee, both highly trained at West Point, who chose which side to fight for based upon loyalty. Training of soldiers, on the other hand, varied greatly. Conviction was noteworthy but sometimes questionable due to drunkenness and inappropriate behavior. Regardless, thousands of soldiers went into battle inadequately outfitted and ill-prepared. While attempting to reclaim western Virginia in 1862 against the North's General George McClellan, Confederate soldiers were observed to be exposed and vulnerable. Furthermore, many were weak and sickened from disease.

Ulysses S. Grant and Robert E. Lee

Directing them all was President Abraham Lincoln, who exercised his authority and knowledge of the Constitution, helping to facilitate a Northern victory. At the onset, he arrested underground secessionists and other defiant activists, holding them under Article I, Section 9. He imprisoned Southern privateers as well, deeming them rebels and pirates. After the Battle of Antietam in September 1862, Abraham Lincoln delivered the famous Emancipation Proclamation. As of January 1, 1863, all slaves in the rebellious states were declared free. The Proclamation not only freed laborers held in bondage in the South, it allowed 186,000 newly freed males to enlist in the Civil War, providing additional military strength to the Union army.

The issue of slavery was at the core of the American Civil War. The South felt so strongly in their belief that they were willing to rebel, to secede in order to continue the traditional aristocratic life they had enjoyed. There was much at stake and they were confident they could win. Yet the South had no means of winning. The best they could hope for was to avoid great loss.

For Lincoln and much of the North, allowing slavery to continue was a violation of the Constitution. Their convictions lay not so much in taking slavery away as in upholding the Constitution, reinforcing the integrity of our forefathers’ vision, and securing a united nation. They had to go to war. It was not an easy victory. Hundreds of thousands of lives were lost. In the spring of 1865, the American Civil War ended as General Lee and the Confederate army surrendered. Abraham Lincoln did not live to see the end of the war, but history would still remember him as one of our nation’s greatest heroes.

Confiscation Act of 1862

The Confiscation Act of 1862 was an updated version of the 1861 Act that gave the federal government the right to take away all property, which included slaves. This law was directed toward anyone who was considered a threat to Lincoln’s government or the war effort. There was concern about the federal government’s power in taking away personal freedoms or the right to property. But it was an important step toward releasing the slaves from bondage, and it added to the number of soldiers who could help fight in Lincoln’s army.

What is Microservice Architecture

Microservices architecture is a style that structures an application as a collection of services. Each process becomes its own service, and each service runs in its own container with its own data storage, sharing no data with the others. The inverse is monolith architecture, which builds all capabilities into a single executable and process: a server-side system developed and managed as a single application.
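As a rough sketch of the contrast (hypothetical service names, with in-process method calls standing in for HTTP requests), two services can each own their data store and talk only through a narrow interface:

```python
# Hypothetical sketch: each microservice owns its data store and exposes a
# narrow interface; services never read each other's data directly.

class UserService:
    def __init__(self):
        self._users = {}  # private data store, owned only by this service

    def create_user(self, user_id, name):
        self._users[user_id] = {"name": name}

    def get_name(self, user_id):
        return self._users[user_id]["name"]

class OrderService:
    def __init__(self, user_service):
        self._orders = []  # a separate data store for this service
        self._users = user_service  # reached only through its interface

    def place_order(self, user_id, item):
        # in a real system this call would be an HTTP request to UserService
        name = self._users.get_name(user_id)
        self._orders.append({"user": name, "item": item})
        return len(self._orders) - 1  # order id

users = UserService()
orders = OrderService(users)
users.create_user(1, "Ada")
order_id = orders.place_order(1, "book")
print(order_id)  # 0
```

In a monolith, both capabilities would live in one process and could share one database; here, a redeploy or failure of one service leaves the other untouched.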

Microservices vs. Monolith Architecture


Microservices implement smart endpoints and dumb pipes: there is no complex middleware, the intelligence lives in the application itself, and the network simply helps route information.

Some characteristics of microservices architecture are:

  • Componentization via services
  • Organized around business capabilities
  • Decentralized data management
  • Designed for failure

Advantages of using microservices are:

  • A team can choose any language for the service
  • Less risk in change
  • Partial Development
  • Independent Scaling
Comparing the two styles:

  • Monolith: simple; whole development; no availability when other services fail.
  • Microservices: complex; partial development; some availability when other services fail; preserves modularity; supports multiple platforms.

What is a chaos monkey?

A chaos monkey is a tool that randomly stops services in the infrastructure during the day, while the services are being monitored. Since failure will happen in any distributed system, letting a chaos monkey loose in an infrastructure makes people more aware of the fact that things will break: by forcing failures to happen, developers must anticipate how a failure occurs and how it will be handled, and monitoring and recovery can then deal with the event. This affects how code is designed and written, making it more robust. This is chaos engineering, the discipline of experimenting on a software system in production in order to build confidence in the system’s capability to withstand turbulent and unexpected conditions.
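A toy illustration of the idea (a sketch, not Netflix's actual tool): a function that randomly kills one of a set of simulated services, and a monitor that detects and restarts it:

```python
import random

# Toy chaos monkey: randomly stop one simulated service, then let a
# monitor detect the outage and restart it.

services = {"auth": True, "billing": True, "search": True}  # True = healthy

def chaos_monkey(services, rng):
    victim = rng.choice(sorted(services))  # pick a service at random
    services[victim] = False               # simulate an unexpected outage
    return victim

def monitor_and_recover(services):
    down = [name for name, healthy in services.items() if not healthy]
    for name in down:
        services[name] = True              # recovery logic restarts it
    return down

rng = random.Random(42)                    # seeded for a repeatable run
victim = chaos_monkey(services, rng)
recovered = monitor_and_recover(services)
print(victim in recovered)     # True
print(all(services.values()))  # True: everything healthy again
```

The point is not the restart logic itself but that writing it at all forces the team to decide, in advance, what "recovery" means for each service.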

Netflix’s chaos monkey repository on GitHub

What is Conway’s Law?

Conway’s law states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.” This is based on the reasoning that for a software module to function, multiple authors must communicate frequently with each other. Thus, the software interface structure of a system will reflect the social boundaries of the organization that produced it. In microservices, there is a lot of variation in how big each team should be and how many services it should support.



IBM Cloud. (2019, February 26). What are Microservices? Retrieved from Youtube:

Richardson, C. (n.d.). Retrieved from

Thoughtworks. (2015, January 31). Martin Fowler – Microservices. Retrieved from Youtube:

ThoughtWorks. (n.d.). Martin Fowler. Retrieved from

Wikipedia. (2020, February). Conway’s law. Retrieved from Wikipedia:


The Crittenden Proposal

With 7 southern states already attempting to secede and form a new nation, Congress debated how to keep the Union together by law, coercion, or compromise.

Kentucky Senator John J. Crittenden, a member of the “Committee of Thirteen,” devised a compromise, a series of amendments to the Constitution in hopes of avoiding further secession threats. Essentially, it would guarantee slavery would remain in established states without government interference. It was later modified to the 36° 30’ parallel. Believing there could be no compromise when it involved slavery, Lincoln rejected the proposal, stating it would set back all he had worked to achieve. As a result, the southern states proceeded to form an independent Confederate government.

One Nation, United yet Divided

With an election on the horizon, the United States is about to undertake yet another exercise rooted in American ideology. Nevertheless, recent events have exacerbated a seemingly divided nation, leaving many to wonder if there is hope of restoring unity. In hindsight, American history is laced with many a fracture along political, social, and economic lines. None was as profound as those witnessed in the 19th century, with sentiments so strong they would lead to civil war.

A nation in its youth, America was a place of simple living during the Second Great Awakening. The nearly ten million Americans were predominantly English and Protestant. Rural communities dotted the eastern third of a country expanding as the Louisiana Purchase (1803) created an additional 828,000 sq. mi. of land. They were farmers, merchants, and artisans. Horses powered machinery. Women remained in the home, raising the children. Pre-industrial man was a self-sufficient man, upholding his freedom and sobriety, guided by a strong moral center with spiritual ties to salvation. What he could not produce, he imported.

Second Great Awakening
1839 Methodist camp meeting Stephen Hofer

The rivers were far-reaching but by no means efficient. Navigation was slow and the ability to deliver goods was untimely. American imports were costly but that changed around 1820. New advancements for railroads and steamboats facilitated travel and gained momentum as the democratic republic sought new ways to deliver a more progressive nation to the people.


President John Quincy Adams (1825-1829) initiated a federally funded project known as the Transportation Revolution. It transformed America’s infrastructure through improved roads, railways, and man-made canals. The Erie Canal (1825) created an artificial waterway between the Atlantic Ocean and the Great Lakes. In addition, the evolution of communication and the telegraph interconnected the population within seconds, as opposed to weeks. Mostly confined to New England, commerce expanded westward alongside these sprawling transportation hubs, building factories with modern, steam powered machinery along the way. Rural Americans relocated to the city as the demand for labor precipitated a full-scale American Industrial Revolution.

Into the 1830s, the notion of standardized precision of products through mass production was uniquely American. Most anything could be mass produced cheaper and better. Profit-driven industrialists used the factory system to build larger corporations that prospered as they capitalized upon productive employees possessing Protestant-Christian virtues in punctuality, reliability, and discipline. The once artisan became obsolete, now selling his craft as a mere laborer. Women entered the workforce. Factories demanded labor of all skill levels, moreover, created new and highly skilled occupations in its path such as machinists, millwrights, and engineers. A drive toward literacy and a new system of public education would also supply sorely needed skilled workers.

The flourishing economy attracted an influx of immigrants from Europe seeking opportunity. About 750,000 Germans, Irish, and Catholics arrived in the 1820s-1830s and another 4 million through the 1850s. The new capitalist society was a quick and difficult transition for many. Specialization of labor generated a system of class relations: master, journeyman, employer, and employee, often causing tension. Immigration and extreme demographic changes caused cultural friction. Nativism grew among Protestant-Christians. Yet pluralism abounded, creating the new face of a growing, consuming American industrial middle-class.


In contrast, the South thrived as state law protected a unique way of life based upon its agrarian tradition. There were no factories nor rapidly expanding cities. Southern culture consisted of a patriarchal society. Chivalry was embraced as a distinct hierarchy thrived, based upon race, gender, and the institution of slavery at its core.

Slavery was legal throughout colonial America but later abolished north of the Mason-Dixon Line with state or federal legislation such as the Northwest Ordinance (1787). Looking westward, any boundary issues were solved when Congress set a dividing line in the Missouri Territory at 36° 30’, a resolution called the Missouri Compromise (1820). Slavery remained in the south due to the lucrative nature of the business. Fertile land, ideal climate, and the availability of slaves from the African slave trade allowed southern farmers to fully equip their plantations for mass production of cotton and other crops. By law, this ruling class could continue to purchase humans to retain as property, affording them all rights of custody and servitude.

The institution of slavery was extremely vital to southern prosperity. A slave owner purchased males and females until the federal Abolition of the Slave Trade in 1807. To increase the workforce, he encouraged procreation. While slave marriages were outlawed, they still created familial relationships and produced offspring. Unfortunately, due to the harsh nature of the institution, family structure was often broken through resale and trade within the South. Breakdown of the family was just one issue that angered Abolitionists, a group that arose from the Second Great Awakening movement.

The South demonstrated a superiority in agriculture, contributing greatly to the flourishing American economy. Despite its “shortcomings,” Southerners were able to justify a superior culture and their reciprocal relationship to the Negro, who made this all possible. Yet they found no shame in human bondage. When slaves resisted or escaped, the South used the power of federally mandated fugitive slave laws. The Great Awakening prompted great criticism and opposition on both sides of the Mason-Dixon. The South’s lack of “Yankee ingenuity” was considered “dead weight” by the growing North, and while abolitionists acknowledged state law, they answered the call of a “higher law.” They established the Underground Railroad, resisted federal agents seeking runaways, and advocated for Personal Liberty Laws (1840s).

By now, the democratic republic generally agreed upon American industry over foreign competition, centralized banking, modern infrastructure, economic growth, and mobility. By this time, Americans had already started moving west. The government assisted in clearing the way by purchase, compromise, war, and annexation. In addition, the Indian Removal Act of 1830 greatly reduced America’s true natives, confining them to the government’s “less desirable” land. But onto the Great Plains and Pacific Coast they left in droves, leaving behind overcrowded cities, violence, poverty, and strained class relations.

America forged on with expansion. If more land was good, then “from sea to shining sea” must be better. Some suggested modeling American ideology. Then conformity would spread over time. Others felt it necessary to conquer. Fearing industrialist expansion was an effort to exert the power of the federal government, President Andrew Jackson vetoed the re-charter of the Second Bank of the United States in 1832. The backlash led to the creation of a two-party political structure, the Democratic Party and the Whig Party.

Andrew Jackson

Democratic President Andrew Jackson (1829-1837) and his followers believed in the voice of the common man and mobility of the culturally diverse, even though Jackson was responsible for the removal of native Americans via the Trail of Tears. They supported small, rural, un-intrusive state government, the same government that perpetuated slavery in the South. The Democrats upheld slavery and wanted it expanded with the acquisition of new territory.

The Whigs (or Patriots) upheld the values of native-born Protestants. They were the voice of education, social reform, temperance, and abolition. Whigs believed in the power of Congress and a strong federal government, federal bank, and protective tariff. They were the industrialists that perpetuated modernized infrastructure. They believed slavery was morally wrong and a (socially and economically) backwards institution. Whig member Abraham Lincoln compared slavery to a cancer that needed to be purged.

Fulfilling his Manifest Destiny of 1845, President James K. Polk (1845-1849) garnered more land than any other American president. Starting with Texas, which entered the U.S. as a slave state, the Jacksonian Polk waged the Mexican War to supplement more land, which the Whigs opposed. As a result, a southern border was established and 500,000 sq. mi. of land annexed, including California. The British agreed to Polk’s acquisition of Oregon Territory, which also expanded the U.S. west of the Rockies. Abolitionists viewed the expansion as nothing more than a Democrats’ conspiracy to extend slavery. In 1846, Pennsylvania Congressman David Wilmot attempted unsuccessfully to propose a ban on slavery’s expansion, the Wilmot Proviso. Neutrality laws were greatly debated. Some southern states threatened to secede until the establishment of the Compromise of 1850, another congressional “solution” allowing states upon admission to choose by popular vote.

The Compromise held until the discovery of gold in California hastened westward travel through midland America, specifically Kansas Territory and Nebraska Territory to the north. These territories began to populate as the nation proposed expanding the railroad, connecting east to west. To find favor with Southern legislators, Stephen A. Douglas of Illinois proposed dividing these territories, essentially overturning the 36° 30’ resolution. When it passed on May 20, 1854, the backlash shattered the Whig Party. As a result, the yet-to-organize Kansas formed two territorial governments and became the battleground for pro-slavery and free-soil sentiment. Once a platform for debate, the murder of a free-soil Kansas settler by a slavery supporter sparked a chain of violent events. In 1856, a pro-slavery mob invaded the town of Lawrence, Kansas, destroying businesses and burning the home of the free-soil governor. Two days later, Abolitionist John Brown led four of his sons to a pro-slavery settlement at Pottawatomie, Kansas. The men brutally dragged and beat five men to death. The violence continued until Kansas’s eventual admission as a “free” state in 1861, thus coining the name “Bleeding Kansas.”

“Bleeding Kansas” instigated more violence that erupted over the next few years in the name of slavery. In 1856, Massachusetts Senator Charles Sumner was brutally beaten with a cane in the Senate Chamber after delivering his “Crime Against Kansas” speech. John Brown’s violence against slavery continued in 1859 at Harper’s Ferry, Virginia. A 36-hour standoff led to his capture and subsequent hanging, sending fear and outrage throughout the South. Southern Democrats, now weakened, looked to the government for protection from further violence. Without strength, they could not protect slavery. Some feared the institution of slavery might be nearing its end.

Bleeding Kansas

With an impending election in 1860, several smaller parties such as the Free-Soil Party, the Know-Nothings, Nativists, and the American Party united to form a strong Republican Party endorsing anti-slavery campaigns. A young Whig-turned-Republican, Abraham Lincoln, surfaced as a viable candidate. Lincoln felt that if the Constitution made no reference to slave ownership and the Constitution applied to every state, then America could not continue to be half slave and half free. If a white person is allowed the opportunity to better himself, so should the black man be. His rhetoric would continue through the 1858 debates. His integrity, moderation, and commitment to Republican ideology would win him the nomination and the subsequent presidential victory.

The election of 1860 is considered a milestone in American politics. There was much fascination and excitement. The media uncovered a smear campaign full of character assassination, scandal, and abuse of power. Yet the overwhelming issue led to a serious struggle and all-out war between two partisan groups. Exactly 150 years later, with another election on the horizon, character assassinations, scandal, and abuse of power still ring throughout bi-partisan groups as they work to deliver a better America. The struggle remains as the forum has changed in what some have called the new “Age of Impeachment,” leaving many to wonder if there ever will be unity. Maybe it is the struggle that keeps us in check and makes our America so great.

1860 United States presidential election

Modeling, Analysis, and Design

What is a model?

A model is a way of expressing a software design in some form of abstract language or pictures. One person’s representation of an object might differ from someone else’s. Developers design various versions of a model and decide which will be best for the final solution. An object modeling language such as UML is used to develop and express the software design. Designing a model can be challenging depending on how complex a system is or how many features it may have. There might be dozens of models to choose from, and an effective modeler needs to choose the appropriate ones.


What is Analysis?

Analysis is a process of discovery whose purpose is to understand the customer’s needs and requirements. An analysis must describe the functional specification: precisely what the system must do to meet its requirements. The analysis covers:

  • Requirements
  • Data Definitions
  • Decision tables
  • How a system should work
  • Identify classes and their relationship and behavior
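A decision table, one of the analysis products listed above, can be sketched as a mapping from condition combinations to actions. The loan-approval rule below is a made-up example:

```python
# Hypothetical decision table captured during analysis: each row maps a
# combination of conditions to exactly one action.
decision_table = {
    # (good_credit, has_income): action
    (True, True): "approve",
    (True, False): "review",
    (False, True): "review",
    (False, False): "reject",
}

def decide(good_credit, has_income):
    return decision_table[(good_credit, has_income)]

print(decide(True, True))    # approve
print(decide(False, False))  # reject
```

Writing the table out exhaustively forces the analyst to state what the system should do for every combination, including the ones the customer never mentioned.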


What is Design?

Design is a blueprint for producing a solution to a problem summarized by the requirements specification. It should always describe how the system is to perform its tasks to meet the specification. The design of algorithms takes into account the details of each component, using a programming language or pseudocode.

Software Engineering Process Framework

Software engineering process framework activities are complemented by several umbrella activities. The umbrella activities of a software process are:

Software tracking and control

Using project management software during the lifespan of the project helps to monitor work on each module, forecast development time, estimate required human and technical resource hours, and calculate a critical path for the project.
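The critical path is the longest dependency chain through the project's tasks: no schedule can be shorter than it. A minimal sketch, with made-up tasks and durations:

```python
from functools import lru_cache

# Sketch of a critical-path calculation over a small, made-up task graph.
# Each task has a duration (in days) and a list of prerequisite tasks.
tasks = {
    "design":  {"duration": 3, "deps": []},
    "code":    {"duration": 5, "deps": ["design"]},
    "test":    {"duration": 2, "deps": ["code"]},
    "docs":    {"duration": 1, "deps": ["design"]},
    "release": {"duration": 1, "deps": ["test", "docs"]},
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    # earliest finish = latest finish among prerequisites + own duration
    start = max((earliest_finish(d) for d in tasks[name]["deps"]), default=0)
    return start + tasks[name]["duration"]

project_length = max(earliest_finish(t) for t in tasks)
print(project_length)  # 11 (design -> code -> test -> release)
```

Tasks off the critical path ("docs" here) have slack; delaying them does not move the delivery date, which is exactly the insight a project manager wants from the calculation.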

Risk management

Risk is an event that may or may not occur. Risk management is the process for identifying, assessing, and prioritizing risks of different types which might endanger the assets and earning capacity of a business. Once a risk has been identified, the risk manager will create a plan to minimize or eliminate the impact of negative events. Many project managers recognize that risk management is important because achieving a project’s goal depends on planning, preparation, results, and achieving strategic goals.

Software quality assurance (SQA)

Quality is defined as the sum of the total characteristics of a software entity that bear on its ability to satisfy stated or implied needs. The purpose of software project quality management is to ensure that the project will satisfy the needs for which it was undertaken. Managing the quality of a software project and its development processes means meeting the requirements and satisfying the user’s experience.

A software package must conform to the written requirements of the project’s processes and deliverables. When the project is “fit for use,” the product can be used as it was intended, ensuring the product will satisfy the needs for which it was developed. In the end, the customer will decide whether the quality of the software is acceptable.

Formal Technical Reviews (FTR)

After completing each module, it is good practice for the technical staff to conduct a review of the completed module. The purpose is to detect quality problems and suggest improvements before they propagate to the next activity. The technical staff will focus on the quality of the software from the customer’s viewpoint.


Measurement

Project and product measures are used to assist the software team in delivering the required software. This helps to ensure the stakeholders’ requirements are met. Since software itself cannot be measured directly, a project is measured by direct and indirect factors. Examples of direct measurements are cost, lines of code, and the size of the software. An example of an indirect measurement would be the quality of the software.
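As an illustration of a direct measure, the fragment below counts non-blank, non-comment lines in a small, made-up Python source string; the quality of that code, by contrast, could only be measured indirectly:

```python
# A direct measure is counted straight from the artifact itself. Here we
# count non-blank, non-comment lines of code in a (made-up) source string.
source = """\
# compute a total
def total(xs):
    return sum(xs)

print(total([1, 2, 3]))
"""

def count_loc(text):
    lines = [line.strip() for line in text.splitlines()]
    return sum(1 for line in lines if line and not line.startswith("#"))

print(count_loc(source))  # 3
```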

Software configuration management (SCM)

Software configuration management is a set of activities designed to control change by identifying the parts of the system that are likely to change, establishing relationships among them, and defining mechanisms for managing different versions of the project. It includes:

  • Revision control
  • Establishment of baselines.
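A minimal sketch of these two ideas (the class and names are hypothetical, standing in for a real version-control tool): each commit creates a numbered revision, and a baseline is a named, frozen revision that later work can be compared against:

```python
# Sketch of revision control and baselines: each commit creates a numbered
# revision; a baseline freezes one revision under a name.

class VersionedFile:
    def __init__(self):
        self.revisions = []   # full history, oldest first
        self.baselines = {}   # baseline name -> revision number

    def commit(self, content):
        self.revisions.append(content)
        return len(self.revisions) - 1  # the new revision number

    def set_baseline(self, name, rev):
        self.baselines[name] = rev      # mark a known-good revision

    def checkout(self, rev=-1):
        return self.revisions[rev]      # default: the latest revision

spec = VersionedFile()
r0 = spec.commit("v1 of the spec")
r1 = spec.commit("v2 of the spec")
spec.set_baseline("release-1.0", r0)
print(spec.checkout(spec.baselines["release-1.0"]))  # v1 of the spec
print(spec.checkout())                               # v2 of the spec
```

Once a baseline is established, changes to the baselined item go through formal change control rather than being edited in place.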

Re-usability measurement

Re-usability measurement defines criteria for product reuse. By backing up each part of the software project, components can later be corrected or supported in order to update or upgrade the software.

Work product preparation and production (models, documents, logs, forms, lists)

Activities to create work products such as models, documents, logs, forms, and lists, which support project planning, are carried out here.

The Software Life Cycle

A framework is a standard way to build and deploy applications. The software process framework is the foundation of the complete software engineering process. A generic process framework encompasses five activities, described below:

Communication: Understand stakeholder intent and requirements.

Always prepare for meetings. If the project domain is unfamiliar, do some research on the topic and provide the findings to the other members of the team so everyone understands the topics discussed. Appoint a person in charge of every meeting, prior to each meeting. Face-to-face meetings work better but are not always an option. Take notes and meeting minutes, and document decisions. Distribute notes and decisions to attendees. Upload the documents to a shared network resource in case you need to review prior decisions and notes. Always stay focused throughout the meeting; modularize the discussion. Once something is agreed to, move on; move on even if there is no agreement or no clarity (table the item). Listen to the needs of the client and do not start to form answers until they are finished; that way there is less of a chance you might miss an important detail. Use professional body language: do not roll your eyes or shake your head.


Planning: Create a plan to guide the work.

Project management software can be used during the lifespan of the project to monitor work on each module, forecast development time, estimate required human and technical resource hours, and calculate a critical path for the project. Over-planning wastes time, and under-planning creates chaos. Work with the stakeholders to understand what is and is not in the scope of the project. Involve stakeholders in discussions when appropriate. They will help to set priorities and deadlines, which keeps the project on time and on budget. Iterate on the planning; don’t try to do everything at once. Break down tasks into small tasks and have a plan for how to complete each one. Incorporate risk management throughout each phase of the project; consider what risks are possible as you plan, and be realistic.

Managing the quality of a software project and its development processes means meeting the requirements and satisfying the user’s experience. Define how you will achieve quality and accommodate change. Track progress on each iteration of the project often, and make adjustments each time the plan is reviewed. Don’t leave team members out of planning.


Modeling: Create models to better understand the problem and the solution.

Adapt models from another project to this one, making the needed changes. Explain why you are building each model, and build a useful model, not a perfect one. Build models so they can be changed. Do not create models that are not needed; this may waste time and take the project out of scope. When presenting a model, ask for feedback from all team members. Encourage team members to share their feelings about the state of the model: ask them what they like and what they do not like. Using structure charts during the implementation of a system will show users the program modules and the relationships among them. Structure charts consist of rectangles that represent the program modules, with arrows and other symbols that provide additional information. This helps analysts and programmers understand the purpose of each module while designing and testing it before it is run as part of the entire system.


Construction: Write and test the code.

Programmers can use the principles of software engineering to manage and improve the quality of the finished system. Working from a specific design, a programmer will use the chosen programming language to transform the program logic into a program module, which the rest of the group can work on simultaneously. To simplify the integration of system components and reduce code development time, an integrated development environment (IDE) will make programming easier. Each program must be tested to make sure it functions correctly; after that, the programs are tested in groups, and finally the entire system is run.

Before you write the first line of code, make sure you understand the problem to be solved and the design principles. Choose the best programming language for the problem, then select a programming environment. Select data structures to match the design, and keep conditional logic simple and minimal. Follow the coding standards at your site for variable names; this will help ensure the code can be easily maintained in the future. Write self-documenting code to help yourself and other developers. Create unit tests to help reduce bugs and problems in the software. Create nested loops so they can be easily tested. A successful test is one that discovers a bug.
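A small example of the unit-testing advice using Python's standard unittest module; the discount function is a made-up function under test:

```python
import unittest

# Made-up function under test.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_typical_case(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        # a successful test probes the edges where bugs tend to hide
        with self.assertRaises(ValueError):
            discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Note the invalid-input test: checking that bad input is rejected is as much a unit test as checking that good input succeeds.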


Deployment: Deliver the software to the customer and gather feedback.

Once the software is delivered, see if the client can provide feedback on how the application works. On the basis of the client’s feedback, we modify the product to supply a better product. To ensure a successful deployment, if possible, try to use a staged deployment model:

#1 – Development: A local developer’s workstation.

#2 – Testing: An integration environment where developers merge changes to test that they work together to create a working application. Unit testing will need to be performed on each individual program or module. This will help to identify and remove execution errors that could cause the program to terminate abnormally, and logic errors that could have been missed during desk checking. Integration testing will need to occur on two or more programs that might depend on each other.

#3 – Staging: An environment where tested changes are run against a production-equivalent environment with data to ensure the application will work properly.

#4 – Production: The live production environment.
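One way to sketch the staged model is as per-stage configuration plus a promotion step. The stage names follow the list above, while the configuration values are hypothetical:

```python
# Hypothetical per-stage configuration: the same code is promoted through
# environments that differ only in configuration, never in the code itself.
STAGES = ["development", "testing", "staging", "production"]

config = {
    "development": {"db_url": "sqlite:///dev.db",   "debug": True},
    "testing":     {"db_url": "sqlite:///test.db",  "debug": True},
    "staging":     {"db_url": "postgres://staging", "debug": False},
    "production":  {"db_url": "postgres://prod",    "debug": False},
}

def promote(stage):
    """Return the next stage in the pipeline, or None at the end."""
    i = STAGES.index(stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

print(promote("testing"))     # staging
print(promote("production"))  # None
```

Keeping the staging configuration identical in shape to production is what allows promotion failures to surface before they reach the live environment.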

If a staging environment is not kept up to date, this can lead to out-of-date testing and incorrect environment factors that cause a promoted model or application to fail, forcing a rollback in development.