Book Club – Chapter 1 – The 8086/8088 Primer

In this week’s book club, we review Chapter 1 of The 8086/8088 Primer, Second Edition: An Introduction to Their Architecture, System Design, and Programming by Stephen P. Morse.

You can view the book online at Mr. Morse’s website.

A heavily diagrammed book, intended for computer novices and professionals alike, about the beginnings of computer chips and the creation of the Intel 8086 and 8088 processors. The book is organized into three topics: Architecture, System Design, and Programming, each broken down into detailed explanations.

This chapter describes the basics of computing systems, explores the various pieces and parts that make up computers, and introduces how computers and processors evolved over time. It also goes over how computers and chips are used and provides a brief insight into the history of computers.

Additional resources may be required to further understand certain concepts. References are included at the end.

Chapter summary

  • Basics of a computing system
    • obtain data from an input 
    • process data
    • deliver an output
  • Programs, instructions
  • Control units
  • Data types
  • Bits, numbers, characters
Early depiction of a computer system found in the book.

Chapter 1: At the very onset of The 8086/8088 Primer, the book dives into the basics of a computer system: its role and how it operates. Computers obtain data from input devices, process that data, and deliver outputs. The processing is done by a computer program. Computer programs are instructions stored in a program area, a container set aside for that type of information. When a program is called and tasks are requested, the operations are handled by a control unit.

Control units are like command centers. They get the instructions, take a peek at them and decipher what needs to happen. Once the tasks requested have been determined and the necessary elements are considered, signals are sent to the taskmasters to ensure that whatever is asked of them is carried out. The information is sent back to the control unit, which moves data between devices. Control units can handle the results while arithmetic devices take care of the computations.

Information related to the computations is stored in memory. At this stage, data is stored locally, meaning it is held temporarily for the duration of the computation, with information passed back and forth between devices as needed. A basic example:

  • Addition
    • Control unit says add
    • A signal is sent to the program
    • Program gets request
    • Sends instructions to control unit
    • The control unit says, ok, I can do this
    • Recognizes “add” task
    • Sends alerts to the data area
    • Data area receives “add” task
    • Adds whatever it’s gotta add
    • Sends back the results
Diagram from the book showing how operations are handled.
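The add sequence above can be sketched as a toy fetch–decode–execute loop. This is a minimal Python sketch for illustration only; the instruction names (`LOAD`, `ADD`, `STORE`) are invented and are not the 8086's actual instruction set.

```python
# Toy model of a control unit running a tiny "add" program.
# Instruction names and encoding are invented for illustration.
def run(program, data):
    acc = 0  # accumulator holds the intermediate result
    for op, arg in program:      # fetch each instruction in sequence
        if op == "LOAD":         # decode: bring a value in from the data area
            acc = data[arg]
        elif op == "ADD":        # decode: the arithmetic unit adds to the accumulator
            acc += data[arg]
        elif op == "STORE":      # decode: write the result back to memory
            data[arg] = acc
    return data

memory = {"x": 2, "y": 3, "result": 0}
program = [("LOAD", "x"), ("ADD", "y"), ("STORE", "result")]
print(run(program, memory)["result"])  # → 5
```

The control unit's job here is just the loop: fetch, figure out what the instruction means, and signal the right "taskmaster" to do the work.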

Program and data areas are similar because they both have memory stores, hold blocks of data, and handle lots of simultaneous tasks. However, program areas don’t typically change. They have in the past, but this was not ideal. The two types of memory are ROM, or read-only memory, and RAM, or random-access memory. When we think of programs, think read-only, like IKEA instructions.

Memory banks consist of sequential locations. Similar to a block in a blockchain or the distant memory of an ex tucked away in mind, each location has a unique address. 😉. The address’s contents are made up of bits. Bits are binary digits: 0s and 1s. Computers keep track of what is going on with flags. Status flags record information about the results generated, while control flags control the operations of the computer. Ports are the doors data passes through when moving between devices.
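Status flags can be sketched in a few lines. This is a hypothetical illustration of common flag semantics (carry, zero, sign) after an 8-bit addition, not the book's exact flag set.

```python
# Sketch of status flags recorded after an 8-bit addition (assumed semantics).
def add8(a, b):
    total = a + b
    result = total & 0xFF            # keep only the low 8 bits
    flags = {
        "carry": total > 0xFF,       # result didn't fit in 8 bits
        "zero": result == 0,         # result is all zeros
        "sign": bool(result & 0x80), # high bit set: negative in two's complement
    }
    return result, flags

print(add8(200, 100))  # 300 wraps to 44, with the carry flag set
```

A later instruction can then test these flags to decide what to do next, which is exactly why the control unit records them.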

The contents of memory consist of instructions or data. These instructions or data sets are sequences of bits, with instruction formats that vary from manufacturer to manufacturer. Data can be either numbers or not numbers.

Numbers

As humans, we’re accustomed to base-10 representations of numbers, kind of like our 5 fingers on each hand. But computers don’t have hands; they count with voltages. They’re limited to two voltage levels to remain simple, reliable, and efficient. Voltage: yes or no, lol. Seriously. Numbers in that context are represented by computers as sequences of bits, and the number of bits determines the allowed range of values stored.

This means that bit widths come in standard sizes, from 8 bits up to 256 bits in the modern day. Each width can store values between a minimum and a maximum. For an 8-bit signed value, the formula looks like this:

  • Min value -(2^7) = -128
  • Max value 2^7 – 1 = 127
Chart depicting the different bit sizes available.
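The min/max formula above generalizes to any bit width. A quick sketch of the signed range for an n-bit two's-complement value:

```python
# Signed range for an n-bit two's-complement value: -(2^(n-1)) .. 2^(n-1) - 1
def signed_range(bits):
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

for n in (8, 16, 32):
    print(n, signed_range(n))
# 8  → (-128, 127)
# 16 → (-32768, 32767)
# 32 → (-2147483648, 2147483647)
```

One value of the 2^n patterns goes to zero, which is why the positive side tops out one short of the negative side.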

For those who don’t know maths well, namely me, the caret symbol ^ represents raising a number to a power. For example, 2 to the 7th power, 2^7, means 2 multiplied by itself 7 times (2^7 = 2 × 2 × 2 × 2 × 2 × 2 × 2 = 128). The seventh power was known as the second sursolid to Robert Recorde, a 16th-century Welsh physician, mathematician, and writer.

Because different bit widths exist, programmers can choose the correct data type. This helps minimize waste by avoiding underflow or overflow. Overflow is when a positive value is more than the maximum that can be stored; underflow is when a negative value is less than the minimum that can be stored. Choosing the right bit width also makes the program more efficient. Bits are grouped four at a time, with each group represented by a single character in the hexadecimal system.
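The four-bits-per-hex-digit grouping is easy to see with Python's binary and hex literals:

```python
# Grouping bits four at a time: each 4-bit group (nibble) maps to one hex digit.
value = 0b1010_1111        # binary literal; the underscore separates the nibbles
print(bin(value))          # → 0b10101111
print(hex(value))          # → 0xaf   (1010 → a, 1111 → f)
print(value)               # → 175 in base 10
```

Eight bits always collapse to exactly two hex digits, which is why hex is the usual shorthand for memory contents.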

Numbers can also be represented as signed numbers, which allow negative numbers to be included in computer programming logic. The simplest approach is placing a sign in the leftmost bit, known as sign-magnitude representation. This doesn’t work well when performing arithmetic, as it breaks the logic, so negative numbers needed to be represented differently.

A mapping method known as two’s complement was adopted to better represent signed numbers. Two’s complement is a method by which we can perform arithmetic and get expected results when using signed numbers. This is done by mapping each signed number to the specific bit pattern that represents it.
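The two's-complement mapping is mechanical: invert the bits and add one. A small sketch showing why ordinary addition then "just works" on signed numbers:

```python
# Two's complement of an n-bit value: invert the bits and add one.
def twos_complement(value, bits=8):
    return ((~value) + 1) & ((1 << bits) - 1)

print(twos_complement(5))   # → 251, the 8-bit pattern for -5 (0b11111011)

# Ordinary unsigned addition now gives the signed answer:
# 5 + (-5) wraps around to 0 in 8 bits.
print((5 + twos_complement(5)) & 0xFF)  # → 0
```

That wrap-around is the whole trick: the same adder circuit handles both signed and unsigned values.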

When we want to go beyond a data type’s size, we can use sign extension to scale from a lower bit count to a higher bit count.
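Sign extension just copies the sign bit into the new upper bits, so the value stays the same at the wider width. A sketch widening 8 bits to 16:

```python
# Sign extension: widen an 8-bit value to 16 bits, copying the sign bit leftward.
def sign_extend(value, from_bits=8, to_bits=16):
    sign_bit = 1 << (from_bits - 1)
    if value & sign_bit:  # negative: fill the new upper bits with 1s
        value |= ((1 << to_bits) - 1) ^ ((1 << from_bits) - 1)
    return value

print(hex(sign_extend(0xFB)))  # 0xFB is -5 in 8 bits → 0xfffb, still -5 in 16 bits
print(hex(sign_extend(0x05)))  # positive values are unchanged → 0x5
```

Without the copied sign bits, 0xFB would widen to 0x00FB (+251) and the value's meaning would silently change.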

Characters

Characters, like numbers, are also sequences of bits. The minimum set consists of 26 letters and 10 digits, or 36 characters, while lowercase letters and special characters push the count past the 64 values a 6-bit code can represent. Thanks to ASCII (American Standard Code for Information Interchange), which uses a 7-bit encoding, characters can be wrapped in 8 bits. The leftover eighth bit is often reserved as a check bit used to verify the other seven.
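One common use of that spare eighth bit is a parity check. This sketch assumes even parity (the convention varies), which is an illustration rather than something the book prescribes:

```python
# Even-parity check bit for a 7-bit ASCII character (one common convention).
def with_parity(ch):
    code = ord(ch)                      # 7-bit ASCII code, e.g. 'A' → 65
    parity = bin(code).count("1") % 2   # 1 if the count of 1-bits is odd
    return code | (parity << 7)         # set bit 8 so the total count is even

print(bin(with_parity("A")))  # 'A' = 0b1000001, two 1s  → parity 0 → 0b1000001
print(bin(with_parity("C")))  # 'C' = 0b1000011, three 1s → parity 1 → 0b11000011
```

If a single bit flips in transmission, the 1-count becomes odd and the receiver knows the byte is corrupt.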

Stacks

Stacks, also known as pushdown lists or last-in-first-out queues, are literal stacks of anything. Stacks are put to work by subroutines, or procedures: smaller, chunked tasks carved out of a more extensive process to divide workloads and make processing more efficient. A subroutine stores its local information on the stack, along with the return address that tells the control unit where to resume. Stack pointers hold the address of the newest item on the stack. Pushing is when further information is added to the stack and the pointer is updated; popping is when the newest item is retrieved from the stack and the pointer is updated.
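Push and pop are simple to sketch. Here a Python list plays the role of the memory area reserved for the stack:

```python
# LIFO stack: push adds to the top, pop retrieves the most recent entry.
stack = []                 # memory area reserved for the stack

def push(value):
    stack.append(value)    # stack pointer moves to the new top

def pop():
    return stack.pop()     # retrieve and remove the newest item

push("return address")     # where the control unit should resume
push("local data")         # the subroutine's temporary information
print(pop())               # → local data    (last in, first out)
print(pop())               # → return address
```

The last-in-first-out order is exactly what nested subroutine calls need: the most recently called routine is the first one to finish and return.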

History of computers

In the 1950s, electronic devices consisted of vacuum tubes, and the first generation of computers was born, consisting of machines like the IBM 650 and 704. As time went on and technology advanced, solid-state devices replaced vacuum tubes, birthing second-generation computers such as the IBM 7090 and Burroughs B5500. During the 1960s, many devices were combined into one component, and the IC, or integrated circuit, was born.

Pluggable ICs became known as chips. Third-generation computers such as the IBM 360, GE 635, and Burroughs B6700 made their way into markets.

During the 1970s, there was a further advance in technology: all integrated-circuit technology began to be put onto one chip. As a result, the Intel 4004 and 8008 coined the term “computer-on-a-chip.” This was a significant breakthrough, as the early vacuum-tube computers cost millions of dollars. In contrast, the new chip computers cost less than ten dollars.

The terms microcomputer and microprocessor sound the same but are quite different. A microcomputer is an entire computer system consisting of a microprocessor, a memory store, and input/output devices, while a microprocessor is a single chip containing the control unit and the arithmetic and logic circuits. Later, this evolved into the single-chip computer, where an entire computer fit on one chip: the Intel 8048.

Microprocessors were introduced with the Intel 4004/8008. The 4004 was created for a calculator, and the 8008 for a computer terminal.

From 1974 on, the 8008 evolved into the 8080, known as a second-generation microprocessor. It was the first designed for many uses and became an industry standard. At this time, lots of competitors came to market, most notably Zilog. It wasn’t until Intel introduced the 8086 in 1978, followed by the 8088 in 1979, that third-generation microprocessors were born, out of the need for more data to be handled, transferred, and stored.

Additional Sources:
