Intro to Computers

A von Neumann computer is a machine that executes step-by-step instructions stored in its own memory.

Most computers use a von Neumann architecture.


Hardware vs. Software

A computer is a physical machine, something tangible that can be held in a person’s hand, a.k.a. hardware.

The step-by-step instructions of a program, on the other hand, are information, a.k.a. data. Thoughts and information are intangible (meaning they can’t be physically touched), thus they are called software. The paper that information is written on can be touched, but paper and ink are not information— rather, paper and ink are only the place where the written information is stored, i.e. a form of hardware. Evidence that information exists without being tangible is that information such as voice or text traveling through the air via radio1 waves can’t be grabbed with the hand, and it is impossible to touch the thoughts in a person’s mind.


Fig. 1 Wireless communication
1 Electromagnetic waves are the technology that communicates information through the air between radios, cell phones, wireless internet connections, televisions, and other wireless devices.


Hardware Elements


Fig. 2 Personal computer
Every von Neumann computer includes the same essential hardware elements: a CPU, storage, and input/output (I/O) devices.
Fig. 3 Intel CPU
The CPU is the brain of the computer and processes the step-by-step instructions of a software program. The CPU is an empty brain until it is given a program to run. The CPU (3) hides inside the computer case.

The program (and the data the program manipulates) is held in volatile and non-volatile storage.

Although slower than volatile storage, non-volatile storage does not lose information when the electrical power is removed or the computer is off. For speed, information is temporarily stored in volatile storage while it is being processed by the CPU and saved in non-volatile storage before the computer is switched off and hopefully before it loses power.

Fig. 4 RAM
An example of volatile storage is Random Access Memory, a.k.a. RAM. Normally the RAM (4) hides inside the computer case.
Fig. 5 USB Flash drive

Fig. 6 Hard disk drive
Non-volatile storage can be either re-writable or read-only. Examples of the former are the USB flash drive, Hard Disk Drive a.k.a. HDD, and Solid State Drive a.k.a. SSD. Examples of the latter are Read Only Memory a.k.a. ROM and the compact disk drive (7) for CD-ROM and DVD disks. Note that most CD-ROM and DVD disks are only writable once, a.k.a. burning. Normally the HDD (8) or SSD, and any ROM, hide inside the computer case. An SSD is more expensive, faster, and more durable than an HDD (it has no mechanically moving parts, a.k.a. solid state).

The I/O hardware transfers information into and out of the computer. The (wired or wireless) internet connection and touch screen are examples of hardware that do both input and output. Examples of input-only devices are the keyboard (9), mouse (10), microphone, camera, scanner, and the gyroscope inside a handheld smartphone or tablet that detects when the device is rotated. Examples of output-only devices are the non-touch screen (1), headphones, speakers, printer, and the vibration motor inside a cell phone.

Software Elements

Text, video, programs, and every other type of information are always processed by the hardware in binary form. Every possible kind of information can be represented as binary digits— sentences written in an alphabet of only two digits, 0 and 1. For example, the letter ‘A’ is represented in the ASCII standard by the number 65— which is equivalent to 1000001 in binary digits. See the Binary Data lesson for elaboration.
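A minimal Python sketch (Python is one of the languages mentioned later in this lesson) shows this correspondence using only the built-in functions ord, format, and chr:

    # The letter 'A' is stored as the number 65, which is 1000001 in binary digits.
    print(ord("A"))            # prints 65, the ASCII number for 'A'
    print(format(65, "b"))     # prints 1000001, the same number in binary digits
    print(chr(0b1000001))      # prints A, converting the binary number back to the letter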

Fig. 7 CPU internals
Hardware is essentially made of billions of microscopic binary switches, each of which can be in either the on (i.e. 1) or off (i.e. 0) position at any given time. The CPU and RAM use electronic transistor switches, the HDD’s spinning platter has magnetic switches, and CD-ROM and DVD disks have optical switches.

The hardware is dumb, unaware what the 1s and 0s represent, and moves the binary data around as instructed by the steps in a software program— i.e. all the intelligence is contained in the program.

Even the instruction steps of a program (a.k.a. machine code instructions) are stored as binary digits in the hardware. For example, the instruction to copy some binary digits (e.g. 1000001 for the letter ‘A’) to a storage location is itself also a very long word of 0 and 1 digits (determined by the address numbers of the source and destination storage locations).
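A purely hypothetical Python sketch can illustrate the idea (the opcode and address sizes below are invented for illustration only; real CPUs define their own instruction formats):

    # Hypothetical 28-bit "copy" instruction: 4 opcode bits + 12 source address
    # bits + 12 destination address bits. All sizes are invented for illustration.
    OPCODE_COPY = 0b0001        # pretend the "copy" operation is opcode 1
    source      = 42            # pretend source storage location (address)
    destination = 100           # pretend destination storage location (address)

    # Pack the three fields into one long word of 0 and 1 digits.
    instruction = (OPCODE_COPY << 24) | (source << 12) | destination
    print(format(instruction, "028b"))   # the whole instruction as binary digits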

But humans don’t think in terms of cryptic words containing only 1s and 0s. We think at a much higher semantic level of instructions that express what the program should accomplish.

Thus higher-level programming languages were invented.

Fig. 8 Magnified pixels
When writing a program in a higher-level programming language, the programmer will only encounter the low-level cryptic binary form of data when manipulating information at the hardware level, e.g. turning on and off the tiny dots on the screen (a.k.a. pixels) that form the screen image. Look at your computer screen with a magnifying glass.


Why Learn Programming?

Robots have started replacing the repetitive, uncreative jobs that humans currently do, and most uncreative jobs will probably not exist by 2033. This shift is contributing to a global financial crisis. Many manual labor jobs transferred to Asia, but workers there won’t forever work for a lower salary than a robot costs.

Most creative human activities can be improved by the computer, because the computer amplifies the natural ability of the mind, analogous to the way a bicycle amplifies the speed, distance, and energy-efficiency with which a human’s legs can travel.

IBM’s Watson computer is reportedly much more accurate at lung diagnosis than a doctor. Hypothetically, even fast-food managers could be replaced with a computer program.

The Mythical Man-Month shows that communication overhead stifles creativity when a creative person must explain to a programmer every step, nuance, and change-of-mind of fluid creative ideas so that the programmer can implement them on the computer. Thus the creative process works most efficiently for those who know how to program what they need. Creative professionals who rely only on programmers to write the tools they need will be at a disadvantage compared to those who directly amplify their professions by also learning to write programs. The quantity and diversity of new or changed software features is always increasing— software is alive.

Being employable in any creative profession in the future may require programming. Evidence that even writers benefit from knowing programming is the much higher income of some who do.


Programming Languages

Programs are written in the vocabulary and grammar of a higher-level programming language and are translated by a program called a compiler, which converts them to the low-level cryptic binary form that the hardware can run.

For now, it is only important to understand that for some programming languages the programmer must compile each (change to the) program before it can be run, whereas programs written in some other programming languages are automatically compiled (or interpretively “compiled”) just-in-time as they run. Typically, only statically typed languages must be precompiled.

There are many qualities for comparing types of programming languages.

It is not necessary for the novice to fully comprehend the remainder of this lesson. The following abstruse concepts should be mastered over time.

Declarative

The declarative property is weird, abstruse, and difficult to capture in a technically precise definition that remains general and not ambiguous, because it is a naive notion that we can declare the meaning (a.k.a. semantics) of the program without incurring unintended side effects. There is an inherent tension between expression of meaning and avoidance of unintended effects, and this tension actually derives from the incompleteness theorems of programming and our universe.

It is an oversimplification, technically imprecise, and often ambiguous to define declarative as “what to do” and imperative as “how to do”. An ambiguous case is that the “what” is the “how” in a program that outputs a program— a compiler.

Evidently the unbounded recursion that makes a language Turing complete is also analogously present in the semantics— not only in the syntactical structure of evaluation (a.k.a. operational semantics). This is logically analogous to Gödel’s theorem— “any complete system of axioms is also inconsistent”. Ponder the contradictory weirdness of that quote! It also demonstrates that the expression of semantics does not have a provable bound, thus we can’t prove2 that a program (and analogously its semantics) halts, a.k.a. the halting problem.

The incompleteness theorems derive from the fundamental nature of our universe, which as stated in the Second Law of Thermodynamics is “the entropy (a.k.a. the # of independent possibilities) is trending to maximum forever”. The coding and design of a program is never finished— it’s alive!— because it attempts to address a real world need, and the semantics of the real world are always changing and trending to more possibilities. Humans never stop discovering new things (including errors in programs ;-).

Precisely and technically capturing this desired notion within this weird universe that has no edge (ponder that! there is no “outside” of our universe) requires a terse but deceptively-not-simple definition which will sound incorrect until it is explained deeply.

Definition:
The declarative property is where there can exist only one possible set of statements that can express each specific modular semantic.

The imperative property3 is the dual, where semantics are inconsistent under composition and/or can be expressed with variations of sets of statements.

This definition of declarative is distinctively local in semantic scope, meaning that it requires that a modular semantic maintain its consistent meaning regardless of where and how it’s instantiated and employed in global scope. Thus each declarative modular semantic should be intrinsically orthogonal to all possible others— and not an impossible (due to incompleteness theorems) global algorithm or model for witnessing consistency, which is also the point of “More Is Not Always Better” by Robert Harper, Professor of Computer Science at Carnegie Mellon University, one of the designers of Standard ML.

Examples of these modular declarative semantics include category theory functors (e.g. the Applicative), nominal typing, namespaces, named fields, and, w.r.t. the operational level of semantics, pure functional programming.

Thus well designed declarative languages can more clearly express meaning, albeit with some loss of generality in what can be expressed, yet a gain in what can be expressed with intrinsic consistency.
 
An example of the aforementioned definition is the set of formulas in the cells of a spreadsheet program— which are not expected to give the same meaning when moved to different column and row cells, i.e. when the cell identifiers are changed. The cell identifiers are part of, and not superfluous to, the intended meaning. So each spreadsheet result is unique w.r.t. the cell identifiers in a set of formulas. The consistent modular semantic in this case is the use of cell identifiers as the input and output of pure functions for cell formulas (see below).

Hyper Text Markup Language a.k.a. HTML— the language for static web pages— is an example of a highly (but not perfectly3) declarative language that (at least before HTML 5) had no capability to express dynamic behavior. HTML is perhaps the easiest language to learn. For dynamic behavior, an imperative scripting language such as JavaScript was usually combined with HTML. HTML without JavaScript fits the declarative definition because each nominal type (i.e. the tags) maintains its consistent meaning under composition within the rules of the syntax.

A competing definition for declarative is the commutative and idempotent properties of the semantic statements, i.e. that statements can be reordered and duplicated without changing the meaning. For example, statements assigning values to named fields can be reordered and duplicated without changing the meaning of the program, if those names are modular w.r.t. any implied order. Names sometimes imply an order, e.g. cell identifiers include their column and row position— moving a total on a spreadsheet changes its meaning. Otherwise, these properties implicitly require global consistency of semantics. It is generally impossible to design the semantics of statements so they remain consistent if randomly ordered or duplicated, because order and duplication are intrinsic to semantics. Consider for example the statements “Foo exists” (or construction) and “Foo does not exist” (and destruction). If one considers random inconsistency endemic to the intended semantics, then one accepts this definition as general enough for the declarative property. In essence this definition is vacuous as a generalized definition because it attempts to make consistency orthogonal to semantics, i.e. to defy the fact that the universe of semantics is dynamically unbounded and can’t be captured in a global coherence paradigm.
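A small Python sketch of the named-fields case (the make_point helper and the field names x and y are hypothetical): with names, the assignments can be reordered without changing the meaning, whereas positional (implicitly ordered) values cannot:

    # Named fields: the order of the keyword assignments doesn't change the meaning.
    def make_point(x=0, y=0):
        return {"x": x, "y": y}

    print(make_point(x=1, y=2) == make_point(y=2, x=1))   # True: reordering is harmless

    # Positional values carry an implied order, so reordering changes the meaning.
    print(make_point(1, 2) == make_point(2, 1))           # False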

Requiring the commutative and idempotent properties for the (structural evaluation order of the) lower-level operational semantics converts operational semantics to a declarative localized modular semantic, e.g. pure functional programming (including recursion instead of imperative loops). Then the operational order of the implementation details does not impact (i.e. spread globally into) the consistency of the higher-level semantics. For example, the order of evaluation of (and theoretically also the duplication of) the spreadsheet formulas doesn’t matter because the outputs are not copied to the inputs until after all outputs have been computed, i.e. analogous to pure functions.
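Below is a minimal Python sketch of that spreadsheet analogy (the cell names A1, B1, C1 and the value helper are hypothetical): each cell is a pure function of other cells, so the order in which cells are evaluated cannot change the results:

    # Each "cell" is a pure function that reads other cells only through value().
    cells = {
        "A1": lambda value: 2,
        "B1": lambda value: 3,
        "C1": lambda value: value("A1") + value("B1"),   # like the formula =A1+B1
    }

    def value(name):
        return cells[name](value)    # no hidden state is mutated, so order can't matter

    # Evaluating C1 before or after A1 and B1 gives the same results: 5 2 3
    print(value("C1"), value("A1"), value("B1"))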

C, Java, C++, C#, PHP, and JavaScript aren’t particularly declarative. Copute’s syntax and Python’s syntax are more declaratively coupled to intended results, i.e. consistent syntactical semantics that eliminate the extraneous so one can readily comprehend code after having forgotten it. Copute and Haskell enforce determinism of the operational semantics and encourage “don’t repeat yourself” (DRY), because they only allow the pure functional paradigm.


2 Even where we can prove the semantics of a program, e.g. with the language Coq, this is limited to the semantics that are expressed in the typing, and typing can never capture all of the semantics of a program— not even for languages that are not Turing complete, e.g. with HTML+CSS it is possible to express inconsistent combinations which thus have undefined semantics.

3 Many explanations incorrectly claim or imply that only imperative programming has syntactically ordered statements. I clarified this confusion between imperative and functional programming. For example, the order of HTML statements does not reduce the consistency of their meaning.

Turing-complete

Turing-complete programming languages are capable of every imaginable computation. A Turing-complete imperative language must support modifying data (at an unbounded number of possible storage locations) and unbounded recursion— i.e. the ability to invoke small programs (a.k.a. functions) which can invoke themselves ad infinitum.
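A minimal Python sketch of unbounded recursion (the countdown function is a made-up example): the function invokes itself until a stopping condition is reached:

    def countdown(n):
        if n < 0:
            return            # the stopping condition; without it the recursion never ends
        print(n)
        countdown(n - 1)      # the function invokes itself

    countdown(3)              # prints 3, 2, 1, 0 on separate lines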

HTML is not Turing-complete. HTML can be supplemented with a Turing-complete language such as JavaScript.

C, Java, C++, C#, JavaScript, Python, PHP, Haskell, and Copute are (comparable to) Turing-complete.

Typed

Statically typed programming languages enable the declaration of the set of rules (a.k.a. invariants) that each type of function and data must obey. Consistent adherence to these rules throughout the program is checked by the compiler when the program is compiled, i.e. at compile-time. Untyped4 (a.k.a. dynamic) languages don’t check such rules at compile-time (rules other than those implicit in the language grammar).

Typing carries the tsuris of making sure all the checked rules compile without error. Sometimes it is impossible and typing must be discarded (a.k.a. cast to the Top type4) in a portion of the program. Ostensibly either typing or sufficient unit tests are necessary to achieve large-scale programs and modularity. Typing can be more declarative, because it automatically checks opaque2 imperativity and unintended semantics w.r.t. the rules declared.

C, Java, C++, C#, Haskell, and Copute are statically typed. JavaScript, Python, and PHP are not.
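A small Python sketch of the difference (the add function is hypothetical): because Python is not statically typed, the rule “only add two numbers” is not checked until the offending line actually runs, whereas a statically typed language would reject it at compile-time:

    def add(a, b):
        return a + b

    print(add(1, 2))     # 3
    print(add(1, "2"))   # TypeError at run-time; a statically typed compiler
                         # would have reported this mistake before the program ran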


4 Untyped really means unityped to the Top type (the ‘i’ is not a typo, ‘uni’ means one). The Top type is elaborated in a more advanced lesson and isn’t necessary to know now.

Object-oriented

Object-oriented languages enable the declaration of subsets (i.e. a subtype with more rules and thus fewer members) for a preexisting set of rules (i.e. for a preexisting type). This enables preexisting programs to interoperate with new types of data without modifying the preexisting program, a requirement for S-modularity. For example, a program that operates on the fruit data type will (without alteration) also operate on new subtypes of fruit such as banana, apple, and orange.
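A minimal Python sketch of the fruit example (the class names are hypothetical, and Python checks types at run-time rather than compile-time, but the subtype idea is the same): the describe function was written only for Fruit, yet it also operates on new subtypes without alteration:

    class Fruit:
        def name(self):
            return "fruit"

    def describe(fruit):                   # a preexisting program that operates on Fruit
        print("This fruit is:", fruit.name())

    class Banana(Fruit):                   # new subtypes added later, each with more rules
        def name(self):
            return "banana"

    class Apple(Fruit):
        def name(self):
            return "apple"

    describe(Banana())                     # prints: This fruit is: banana
    describe(Apple())                      # prints: This fruit is: apple  (describe unchanged)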

Subtyping enhances modularity, but with the tradeoff of fewer compositional degrees-of-freedom. Learning when to use and not use subtyping is the art of becoming a master programmer, but isn’t a prerequisite to get started with learning programming.

Java, C++, C#, and Copute are statically subtyped. JavaScript, Python, and PHP are statically unityped4, thus not statically subtyped. C and Haskell are statically typed, but not statically subtyped.

Copyright © 2013, All Rights Reserved.