INTRODUCTION
Today, most people
don't need to know how a computer works. Most people can simply turn on a
computer or a mobile phone and point at some little graphical object on the
display, click a button or swipe a finger or two, and the computer does
something. An example would be to get weather information from the net
and display it. How to interact with a computer program is all the
average person needs to know.
But, since you are
going to learn how to write computer programs, you need to know a little bit
about how a computer works. Your job will be to instruct the computer to
do things.
proc-ess /
Noun:
A series of actions or
steps taken to achieve an end.
pro-ce-dure /
Noun:
A series of actions
conducted in a certain order.
al-go-rithm /
Noun:
An ordered set of
steps to solve a problem.
Basically, writing software (computer programs) involves describing processes and procedures; it involves the authoring of algorithms. Computer programming involves developing lists of instructions - the source code representation of software. The stuff that these instructions manipulate consists of different types of objects, e.g., numbers, words, images, sounds, etc... Creating a computer program can be like composing music, like designing a house, like creating lots of stuff. It has been argued that in its current state it is an art, not engineering.
An important reason to
consider learning about how to program a computer is that the concepts
underlying this will be valuable to you, regardless of whether or not you go on
to make a career out of it. One thing that you will learn quickly is that
a computer is very dumb, but obedient. It does exactly what you tell it
to do, which is not necessarily what you wanted. Programming will help
you learn the importance of clarity of expression.
A deep
understanding of programming, in particular the
notions of
successive decomposition as a mode of analysis
and debugging of
trial solutions, results in significant
educational
benefits in many domains of discourse,
including those
unrelated to computers and information
technology per se.
(Seymour Papert, in "Mindstorms")
It has often been
said that a person does not really
understand
something until he teaches it to someone else.
Actually a person
does not really understand something
until after
teaching it to a computer, i.e., express it
as an algorithm.
(Donald Knuth, in "American
Mathematical Monthly," 81)
Computers have
proven immensely effective as aids to clear
thinking. Muddled and half-baked ideas have sometimes
survived for
centuries because luminaries have deluded
themselves as much
as their followers or because lesser
lights, fearing
ridicule, couldn't summon up the nerve to
admit that they
didn't know what the Master was talking
about. A test as
near foolproof as one could get of whether
you understand
something as well as you think is to express
it as a computer
program and then see if the program does
what it is supposed
to. Computers are not sycophants and
won't make
enthusiastic noises to ensure their promotion
or camouflage what
they don't know. What you get is what
you said.
(James
P. Hogan in "Mind Matters")
But, most of all, it
can be lots of fun! An associate once said to me "I can't believe
I'm paid so well for something I love to do."
Just what do the instructions a computer understands look like? And what kinds of objects do the instructions manipulate? By the end of this lesson you will be able to answer these questions. But first, let's try to write a program in the English language.
Remember what I said
in the Introduction to this lesson?
Writing software,
computer programs, is a lot like
writing down the
steps it takes to do something.
Before we see what a
computer programming language looks like, let's use the English language to
describe how to do something as a series of steps. A common exercise that
really gets you thinking about what computer programming can be like is to
describe a process you are familiar with.
Describe how to make
a peanut butter and jelly sandwich.
Rather than write my own version of this exercise, I searched the Internet for the words "computer programming sandwich" using Google. One of the hits returned was http://teachers.net/lessons/posts/2166.html. At the link, Deb Sweeney (Tamaqua Area Middle School, Tamaqua, PA) described the problem as:
Objective: Students
will write specific and sequential steps
on how to make a peanut butter and jelly
sandwich.
Procedure: Students
will write a very detailed and step-by-step
paragraph on how to make a peanut butter and
jelly
sandwich for homework. The next day, the
students will
then input (read) their instructions to the
computer
(teacher). The teacher will then
"make" the programs,
being sure to do exactly what the students
said...
When this exercise is
directed by an experienced teacher or mentor it is excellent for demonstrating
how careful you need to be, how detailed you need to be, when writing a
computer program. A demonstration of this
exercise is available on YouTube.
Programming in a
natural language, say the full scope of the English language, seems like a very
difficult task. But, before moving on to languages we can write programs
in today, I want to leave on a high note. Click
here to read about how Stephen Wolfram sees programming in a natural language
happening.
A little less than fifteen years ago, Mitchel Resnick and friends at MIT introduced a programming environment called Scratch. It provides a new approach to teaching computer programming through a graphical user interface that eliminates the possibility of making certain mistakes common in text-based programming.
Programs are
constructed by connecting blocks, each representing some functionality
available in the system. Figure 1.1 shows a simple program that asks
the person running it for their name and then says hello. Color is used
for categories that the blocks belong to. Notice that the ask block
and the corresponding answer block are the same shade of
blue. The shapes of the blocks determine where they can be placed to form
an acceptable program. The rounded green join block fits
into a rounded hole in the violet say block.
One of Scratch's
strengths is the ease with which students can construct games and animated
simulations and stories. Another strength is the Scratch website itself which provides access to many
tutorials and a community of users with programs they've written. To
learn more about Scratch, visit the Scratch
Wiki.
Blocks-based
programming is a great way to get started. But, as the size of the
programs you want to write grows or you need a feature not available in
Scratch, it's time to switch to text-based programming.
Almost all of the
computer programming these days is done with high-level programming
languages. There are lots of them and some are quite old. COBOL,
FORTRAN, and Lisp were devised in the 1950s!!! As you will see,
high-level languages make it easier to describe the pieces of the program you
are creating. They help by letting you concentrate on what you are trying
to do rather than on how you represent it in a specific computer
architecture. They abstract away the specifics of the
microprocessor in your computer. And, all high-level languages come with
large sets of common stuff you need to do, called libraries.
In this introduction, you will work with two computer programming languages: Logo and Java. Logo comes from Bolt, Beranek & Newman (BBN) and the Massachusetts Institute of Technology (MIT). Seymour Papert, a scientist at MIT's Artificial Intelligence Laboratory, helped Wally Feurzeig at BBN design Logo in the late 1960s. More research exists on its use in educational settings than for any other programming language.
Java is a fairly recent programming language. It appeared in 1995, just as the Internet was starting to get lots of attention. Java was invented by James Gosling, working at Sun Microsystems. It's sort-of a medium-level language. One of the big advantages of learning Java is that there is a lot of software already written (see: Java Class Library) which will help you write programs with elaborate graphical user interfaces that communicate over the Internet. You get to take advantage of software that thousands of programmers have already written. Java is used in a variety of applications, from mobile phones to massive Internet data manipulation. You get to work with window objects, Internet connection objects, database access objects, and thousands of others. Java is the language used to write Android apps.
So, why do these
lessons start with the Logo programming language?
No other computer programming language has the depth of research, based on its use in educational settings, that Logo has. Its roots are in the development of interactive learning environments. While at BBN (Bolt, Beranek, and Newman), Wally Feurzeig was researching the use of a timeshared computer to improve the teaching of mathematical concepts. The question he wanted to answer was whether
kids would embrace the new technology and learn using it. With some
success demonstrated using an existing programming language, Wally contracted
Seymour Papert to help with Logo's design. Seymour wrote the functional
specification for Logo. Daniel
Bobrow then wrote the
first Logo interpreter. Since these early days, hundreds of books and
research papers have been written about its use in the classroom. Cynthia
Solomon, who started MIT's
Logo Group with Dr. Papert, has put together a comprehensive website on
Logo: logothings.wikispaces.com.
I like using the Logo
language to teach introductory programming because it is very easy to
learn. The faster you get to write interesting computer programs the more
fun you will have. And... having fun is important! But do not let
Logo's simplicity fool you into thinking it is just a toy programming
language. Logo is a derivative of the Lisp programming language, a very
powerful language still used today to tackle some of the most advanced research
being performed. Brian
Harvey shows the power
of Logo in his Computer Science Logo Style series of
books. Volume
3: Beyond Programming covers
six college-level computer science topics with Logo.
Both Logo and Java
have the same sort of stuff needed to write computer programs. Each has
the ability to manipulate objects (for example, arithmetic functions for
working with numbers). Each lets you compare objects and do a variety of
things depending on the outcome of the comparison. Most importantly, they
let you define named procedures. Named procedures are lists of
built-in instructions and other named procedures. The abstraction of
naming stuff lets you write programs in a language you yourself define.
This is the stuff that programming is really all about, as you will see.
Just to give you a
feel for what programming is like in a high-level language, here's a program
that greets us, pretending to know English.
print [Hello world!]
This is one of the simplest programs that can be written in most high-level languages. PRINT is a command in Logo. When it is performed, it takes whatever follows it and displays it. The "Hello world" program is famous; check out its description on Wikipedia by clicking here.
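For comparison, here is the same greeting written in Java, the other language used in these lessons. This is just a minimal sketch; the class name Hello is arbitrary.

public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello world!");   // Java's counterpart to Logo's PRINT
    }
}

Notice how much more scaffolding Java requires before you get to the one instruction that does the work.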
In addition to
commands, Logo has operators that output some sort of result.
Although it's a bit contrived, here is a program that displays the
product of a constant number (ten) and a random number in the range of zero
through fourteen.
print product 10 (random 15)
In this source code,
the PRINT command's input is the output of the PRODUCT operator. PRODUCT multiplies
whatever follows it by whatever follows that and outputs the result.
So, PRODUCT needs two inputs. RANDOM is
an operator that outputs a number that is greater than or equal to zero (0) and
less than the number following it. So, PRODUCT gets its
second input from the output of RANDOM.
Confusing?
Figure 1.2 shows a plumbing diagram, a
graphical representation of how all these procedures fit together.
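If it helps, here is a rough Java equivalent of the same plumbing (a sketch; the names are arbitrary). Java's nextInt(15) behaves like Logo's RANDOM 15, outputting a number from zero through fourteen.

import java.util.Random;

public class ProductDemo {
    public static void main(String[] args) {
        Random rng = new Random();
        int n = rng.nextInt(15);        // like RANDOM 15: outputs 0 through 14
        System.out.println(10 * n);     // like PRINT PRODUCT 10 (RANDOM 15)
    }
}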
Finally, here's a snippet of advanced Logo source code, just to give you a feeling for what it looks like. This is a procedure definition for selecting the maximum number from a list of numbers.
to getMax :maxNum :numList
  if empty? :numList [output :maxNum]
  if greater? (first :numList) :maxNum [output getMax (first :numList) (butfirst :numList)]
  output getMax :maxNum (butfirst :numList)
end
Again, do not worry if
you do not understand exactly how this procedure works. It will be a
while before you will be writing anything like this. But, I want you to
see that the words that make up the program's instructions and the instructions
themselves are similar to English sentences, e.g., the first line and a half in
the procedure are similar to the sentences:
If the list of numbers
to process is empty then output the maximum number processed.
If the first number in
the list is greater than the maximum number processed so far then ...
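For comparison, here is a minimal Java sketch of the same recursive idea (the names mirror the Logo procedure; this is an illustration, not part of the lesson's Logo code).

import java.util.List;

public class GetMax {
    // Same approach as the Logo procedure: carry along the maximum seen so far.
    static int getMax(int maxNum, List<Integer> numList) {
        if (numList.isEmpty()) return maxNum;                     // empty? -> output :maxNum
        int first = numList.get(0);
        List<Integer> rest = numList.subList(1, numList.size());  // butfirst
        return first > maxNum ? getMax(first, rest) : getMax(maxNum, rest);
    }

    public static void main(String[] args) {
        System.out.println(getMax(Integer.MIN_VALUE, List.of(3, 17, 5, 11)));  // prints 17
    }
}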
So, a high-level programming language is *sort-of* like English, just one step closer to the language a computer really understands. Now let's move on to what a computer's native language looks like when it is given a symbolic representation.
One abstract layer
above a computer's native language is assembler language. In assembler
language, everything is given human-friendly symbolic names. The
programmer works with operations that the microprocessor knows how to do, with
each given a symbolic name. Objects in the microprocessor and addresses
of stuff in the computer's memory can also be given meaningful names.
This is actually a very big step over what a computer understands, but still tedious
for writing a large program. Assembler language instructions still have a
place for little snippets of software that need to interact directly with the
microprocessor and/or those that are executed many, many, many times.
Table 1.1 is an example of DEC PDP-10 assembler language: a function that returns the largest integer in a group of integers named NUMARY. The group contains NUMNUM members.
Label   | OpCode | Register | Memory Address | Index Register | Comment
GETMAX: | MOVSI  | T1       | 400000         |                | ; init T1 to smallest integer
        | MOVE   | T2       | NUMNUM         |                | ; get number of array members
GTMAX2: | SOJL   | T2       | [POPJ P,]      |                | ; decr idx, if -1 then done
        | CAMG   | T1       | NUMARY         | (T2)           | ; skip if T1 > array member
        | JRST   |          | GTMAX2         |                | ; continue with next number
        | MOVE   | T1       | NUMARY         | (T2)           | ; T1 gets new max number
        | JRST   |          | GTMAX2         |                | ; continue with next number

Table 1.1
Hopefully this gives you a feel for how primitive computer instruction sets are. I'm not going to go into the details of every instruction. If you want to go through it in detail on your own, the PDP-10 machine language is detailed here.
A few points I want to
expose you to are the general kinds of things being done.
1. moving objects (numbers) into the computer's
registers - very fast temporary storage,
2. decrementing the value in a register,
3. comparing the contents of a register to some
value in the computer's memory, and
4. transferring control to an instruction that's
not in the standard sequential order - down the page.
So, as you've seen,
higher-level programming languages provide similar functionality and in a form
that is closer to the English language.
But there is a problem
with assembler language - it is unique for every computer architecture.
Although most deskside and notebook computers these days use the Intel
architecture, this is only recently the case. And... a variety of
computer architectures are commonly used in game systems, smart phones,
tablets, automobiles, appliances, etc...
Ok, we are almost at a
point where I can show you machine language, the *native* language of a
computer. But for you to understand it, I'm going to have to explain how
everything is represented in a computer.
Your computer
successfully creates the illusion that it
contains
photographs, letters, songs, and movies. All it
really contains is
bits, lots of them, patterned in ways
you can't see. Your computer was designed to store just
bits - all the
files and folders and different kinds of
data are illusions
created by computer programmers.
(Hal Abelson,
Ken Ledeen, Harry Lewis, in "Blown to Bits")
Basically, computer
instructions perform operations on groups of bits. A bit is either on or
off, like a lightbulb. Figure 1.3_a shows an open switch and a
lightbulb that is off - just like a transistor in a computer represents a bit
with the value: zero. Figure 1.3_b shows the switch in the closed
position and the lightbulb is on, again just like a transistor in a computer
representing a bit with the value: one.
Figure 1.3_a    Figure 1.3_b
A microprocessor,
which is the heart of a computer, is very primitive but very fast. It
takes groups of bits and moves around their contents, adds pairs of groups of
bits together, subtracts one group of bits from another, compares a pair of
groups, etc... - that sort of stuff.
Inside a
microprocessor, at a very low level, everything is simply a bunch of switches,
also known as bits - things that are either on or off! Time to expand on
how this is done; first let's explore how groups of bits can be used to form
numbers.
There are only 10
different kinds of people in the world:
those who know
binary and those who don't.
- Anonymous
Computers are full of zillions of bits that are either on or off. The way we talk about the value of a bit in the electrical engineering and computer science communities is first as a logical value (true if on, false if off) and secondly as a binary number (1 if the bit is on and 0 if it's off). Most bits in a computer are manipulated in groups, so we humans need a way to describe groups of bits, the things/objects a computer manipulates. Today, bits are most often grouped in quantities of 8, 16, 32, and 64.
Think about how you write down sequential numbers starting with zero: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, etc... Our decimal number system has ten symbols. In this sequential series, when we run out of symbols, we combine them. You learned how to do this so long ago, in grade school, that today you just naturally think in terms of single digit numbers, then tens, hundreds, thousands, etc... The decimal number 1234 is one thousand, two hundreds, three tens, and four units.
So, how does the binary
number system used inside computers work?
Well, with only two
symbols, we would write the same sequential numbers as above: 0, 1, 10, 11,
100, 101, 110, 111, 1000, 1001, 1010, 1011. The decimal number 1234 in
binary is 10011010010.
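(You can check this by adding up the place values of the 1 bits: 1024 + 128 + 64 + 16 + 2 = 1234.)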
Since even reasonable numbers that we use all the time make for very long binary numbers, the bits are grouped in 3s and 4s, which are simple to convert into numbers in the octal and hexadecimal number systems. For octal, we group three bits together. Take the binary equivalent of decimal 1234, 10011010010, and put spaces in between each group of three bits - starting at the right and going left.
10011010010
= 10 011 010 010
Now use the symbols 0,
1, 2, 3, 4, 5, 6, and 7 (eight symbols, so OCTAL) to replace each group.
10 011
010 010 = 2 3 2 2 = 2322
The octal representations of the binary patterns are certainly easier to read, write, and remember than their binary counterparts. An even more compact representation can be achieved by grouping the bits in chunks of four and converting these to hex numerals.
When you group four bits together and use sixteen symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F) as their abbreviations, you have a hexadecimal representation.
10011010010
= 100 1101 0010 = 4D2
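If you want to verify these conversions yourself, Java's standard library can produce all three representations; here's a minimal sketch.

public class BaseCheck {
    public static void main(String[] args) {
        int n = 1234;
        System.out.println(Integer.toBinaryString(n));  // 10011010010
        System.out.println(Integer.toOctalString(n));   // 2322
        System.out.println(Integer.toHexString(n));     // 4d2
    }
}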
As you continue to
explore how computers work, you'll hear more about numbers expressed in octal
and hex; these are just more manageable representations of binary information
- the digital world.
Table 1.2 compares the
decimal, binary, octal, and hexadecimal number systems.
in Decimal | in Binary | in Octal | in Hex
1          | 1         | 1        | 1
2          | 10        | 2        | 2
3          | 11        | 3        | 3
4          | 100       | 4        | 4
5          | 101       | 5        | 5
6          | 110       | 6        | 6
7          | 111       | 7        | 7
8          | 1000      | 10       | 8
9          | 1001      | 11       | 9
10         | 1010      | 12       | A
11         | 1011      | 13       | B
12         | 1100      | 14       | C
13         | 1101      | 15       | D
14         | 1110      | 16       | E
15         | 1111      | 17       | F
16         | 10000     | 20       | 10

Table 1.2
So, if the most common
groupings of bits in a computer are 8, 16, 32, and 64, what kinds of numbers
can these groups represent?
A group of eight bits
has binary values 00000000 through 11111111, or expressed in decimal 0 through
255. A group of sixteen bits has binary values 0000000000000000 through
1111111111111111, or decimal 0 through 65535. I'm not going to type in
binary representations for groups of 32 and 64 bits. The range of decimal
values for a group of 32 bits is 0 through 4,294,967,295. The range of
decimal values for a group of 64 bits is 0 through 18,446,744,073,709,551,615
- or almost eighteen and a half quintillion.
But wait... these
numbers are all positive (Whole Numbers). If we are going to allow for
subtraction operations on numbers, which can result in negative numbers, we
need Integers. Modern computers use one bit in each of the groups to
represent the sign (positive or negative) when the groups are used to represent
integers. Table 1.3 shows the range of numbers that
can be represented with groups of 8, 16, 32, and 64 bits.
Number of Bits | Unsigned Maximum Value     | Signed Minimum Value       | Signed Maximum Value
8              | 255                        | -128                       | 127
16             | 65535                      | -32768                     | 32767
32             | 4,294,967,295              | -2,147,483,648             | 2,147,483,647
64             | 18,446,744,073,709,551,615 | -9,223,372,036,854,775,808 | 9,223,372,036,854,775,807

Table 1.3
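These limits match the built-in constants of Java's integer types, whose sizes are exactly these bit groupings; here's a quick sketch you can run to print them.

public class BitGroupRanges {
    public static void main(String[] args) {
        // byte, short, int, and long are 8, 16, 32, and 64 bits wide.
        System.out.println(Byte.MIN_VALUE + " through " + Byte.MAX_VALUE);
        System.out.println(Short.MIN_VALUE + " through " + Short.MAX_VALUE);
        System.out.println(Integer.MIN_VALUE + " through " + Integer.MAX_VALUE);
        System.out.println(Long.MIN_VALUE + " through " + Long.MAX_VALUE);
    }
}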
That's about as deep as I want to get into the representation of numbers in computers and the binary, octal, and hexadecimal number systems. Yes, computers have division operators, but I am not going to cover numbers that include fractional parts, i.e., the "rational" and "irrational" numbers, due to the complexity of their implementations. If you want to read more, I googled and found what looks like a good place to start: All About Circuits - Systems of numeration. Read through it and continue on for a few more web pages in the series. The link to binary number at the start of this section points to a Wikipedia page that will also give you additional depth, including the history of the binary number system.
Ok, so numbers are
simply groups of bits. What other objects will the computer's
instructions manipulate? How about the symbols that make up an alphabet?
It should come as no
surprise that symbols that make up alphabets are just numbers, groups of bits,
too. But how do we know which numbers are used to represent which
symbols, or characters as I'm going to call them from this point on?
It's all about standards. In these lessons, we will use the American Standard Code for Information Interchange (ASCII) standard. It is so ubiquitous that it even has its own web page, www.asciitable.com.
Let's walk through a couple of examples - entries in the table. Here are some characters, their decimal values, and their binary values, which are then transformed into octal numbers.
Uppercase 'A' =
decimal 65 = binary 01000001 = 01 000 001 = octal 101
Uppercase 'Z' =
decimal 90 = binary 01011010 = 01 011 010 = octal 132
The digit '1' =
decimal 49 = binary 00110001 = 00 110 001 = octal 061
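These values are easy to confirm with a couple of lines of Java (a minimal sketch; in Java, too, a character is just a number):

public class AsciiValues {
    public static void main(String[] args) {
        char c = 'A';
        System.out.println((int) c);                    // 65
        System.out.println(Integer.toBinaryString(c));  // 1000001 (the leading zero is dropped)
        System.out.println(Integer.toOctalString(c));   // 101
    }
}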
Table 1.4 is a small
slice from the full ASCII character set, just enough to give you a flavor of
its organization.
Character | in Binary | in Octal | in Decimal | in Hex
space     | 00100000  | 040      | 32         | 20
(         | 00101000  | 050      | 40         | 28
)         | 00101001  | 051      | 41         | 29
*         | 00101010  | 052      | 42         | 2A
0         | 00110000  | 060      | 48         | 30
1         | 00110001  | 061      | 49         | 31
2         | 00110010  | 062      | 50         | 32
9         | 00111001  | 071      | 57         | 39
A         | 01000001  | 101      | 65         | 41
B         | 01000010  | 102      | 66         | 42
C         | 01000011  | 103      | 67         | 43
Z         | 01011010  | 132      | 90         | 5A
a         | 01100001  | 141      | 97         | 61
b         | 01100010  | 142      | 98         | 62
c         | 01100011  | 143      | 99         | 63
z         | 01111010  | 172      | 122        | 7A

Table 1.4
Here's a little trivia, from days long past when computers were so slow (compared with today):
· Look closely at the binary representations of the uppercase and lowercase letters. You can convert uppercase to lowercase by turning on a single bit, and clearing that bit converts lowercase back to uppercase.
· Clearing two bits will convert an ASCII digit to its numeric value. Setting the same bits converts a number in the range 0...9 into its ASCII character representation.
Both tricks are shown in the sketch below.
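Here is a minimal Java sketch of both tricks (the class name is just for illustration):

public class AsciiTricks {
    public static void main(String[] args) {
        // Turning on bit 5 (value 32, octal 040) converts uppercase to lowercase...
        char lower = (char) ('A' | 0b00100000);   // 'a'
        // ...and clearing it converts lowercase back to uppercase.
        char upper = (char) ('a' & ~0b00100000);  // 'A'
        // Clearing the two bits 00110000 turns an ASCII digit into its numeric value...
        int value = '7' & ~0b00110000;            // 7
        // ...and setting them turns a small number back into its ASCII character.
        char digit = (char) (7 | 0b00110000);     // '7'
        System.out.println(lower + " " + upper + " " + value + " " + digit);
    }
}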
Take a moment to see if you understand the explanation of binary numbers. HERE is a jLogo program that converts an 8-bit byte into numbers and ASCII characters. Try it out!
The image on your computer's display (actually, on all digital displays) consists of a bunch of colored points called pixels. A pixel is an object. It has a color and a position (its coordinates), which consists of the row and column it is at. Figure 1.4 shows an artist's rendition, a magnification of a display with a circle drawn in yellow. The tiny black dots are the pixels and the big yellow dots are the pixels that have been colored.
As an example, to
display a thin vertical line, the color values of a column of pixels are set to
the desired color of the line. If you want a thicker vertical line, you
set the color values of the pixels of a group of consecutive columns to the
desired color. Figure 1.5 shows a red line that's a single pixel wide and
an orange line that's three pixels wide. The orange line is actually a
very thin rectangle.
So, the location of
each pixel is obviously specified by a pair of numbers; what about the pixel's
color?
Well...
a pixel's color is also specified as numbers, three of them, called RGB (Red,
Green, Blue) values. Play with the following JavaScript program which
lets you see what number values generate which colors. What color do you
get if you set red to 170, green to 85, and blue to 255? What's the RGB
value for your favorite color?
[Color Numbers Tool: an interactive widget with Red Value, Green Value, and Blue Value inputs and a decimal/hexadecimal display of the resulting color number.]
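If you'd rather compute the answer than guess, here is a minimal Java sketch that packs the three 8-bit values into one number and prints its hexadecimal form (the names are arbitrary):

public class ColorNumbers {
    public static void main(String[] args) {
        int red = 170, green = 85, blue = 255;
        // Pack the three 8-bit components into one 24-bit RGB value.
        int rgb = (red << 16) | (green << 8) | blue;
        System.out.printf("#%06X%n", rgb);   // prints #AA55FF
    }
}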
So, just as groups of
bits represent numbers and symbols, they are used to form pixels.
Optional exploration: Why red, green, and blue? Why
these colors?
There are many good explanations on the
Internet. If you are interested, search the net or check out the Wikipedia
entry for color
vision.
Ok... I've
exposed you to a variety of objects that you commonly see when you are using a
computer, things that can be manipulated with instructions in a computer
programming language. Now let's move on to the computer's instructions,
one more thing that is just a bunch of bits!
So, all a computer has
in it is bits. You've seen how they are used to represent stuff, pixels,
numbers and characters. I've mentioned that computers perform operations
on the bits, like move them around, add pairs of them together, etc...
One final obvious question is: how are instructions that a computer performs
represented?
Well, if you
instructed a computer in its native language (machine language), you would have
to write instructions in the form of (yes, once again) binary
numbers. This is very,
VERY hard to do. Although the pioneers of computer science did this, no
one does this these days.
Just to give you
something to look at, just to compare, Table 1.5 shows what the assembler
language program in Table 1.1 could look like assuming that the
machine instructions are loaded into memory at addresses 100 through 107.
Also, the group of numbers starts at memory address 111 and the size of the
group is in memory address 110.
Address | OpCode | Register | Memory Address | Index Register
100     | 205    | 1        | 400000         |
101     | 200    | 2        | 110            |
102     | 361    | 2        | 107            |
103     | 317    | 1        | 111            | 2
104     | 254    |          | 102            |
105     | 200    | 1        | 111            | 2
106     | 254    |          | 102            |
107     | 263    | 17       |                |

Address | Value
110     | 67
111     | 47316
. . .   |
177     | 2751

Table 1.5
A detailed explanation
of any computer's instruction set is beyond what can be presented here. I
just wanted you to see how the symbolic information in assembler language
programs needs to be converted to numbers (bits) before a computer can perform
it.
If you really want
more details now, here
is a side lesson from one of my favorite introductory computer science books:
The Computer Continuum.
No introduction to
computer programming would be complete without at least mentioning debugging.
The term refers to the discovery and correction of mistakes in computer
programs. The computer is doing what you instructed it to do, not what
you meant it to do. If you enjoy puzzles, there's a good chance you will
find the process of debugging an interesting challenge.
The origin of the term is often traced to a bug (a moth) found in a relay of the Harvard Mark II computer in 1947 by the team of Grace Murray Hopper (later a rear admiral). The moth was why the program was not working, and it was taped into the logbook.
Figure 1.6
Debugging is a
process. The good news for us is that mistakes in introductory level
programs are not that hard to diagnose and fix. It's basically narrowing
in on the instruction, or two, that are not doing what you intended.
Steps you take are like solving Sudoku or Mastermind puzzles.
Debugging a program
can be done in steps that match the Scientific Method.
1. Observe,
2. Hypothesize,
3. Predict, and
4. Test
In the observation
step, you think hard about what is happening versus what you expected to be
happening. An example is a program that is drawing something, but the
jumble of lines you see on the display does not look right. Or, your
program has a button that doesn't do anything when you click on it. As a
programmer, you approach these bugs like the legendary Sherlock Holmes
approached his cases. Review the program you've written and ask questions
like: "How could these instructions produce what is happening?"
Observation should
lead you to the point where you can make a hypothesis about the bad
behavior. For the drawing not coming out right, a hypothesis might be
something like: "If computing the metrics (orientation, length, ...) of
the first line is off, that might produce what I'm seeing." For a
button not working, a hypothesis could be: "If the mouse's location when
it is clicked is computed wrong, the program will ignore the click."
The hypotheses made are often based on intuition. This is because usually
the person debugging either wrote the program or at least modified it.
Given a hypothesis, the next step is to figure out how to test it by predicting what should happen if the hypothesis is correct. Continuing with the example of the misbehaving drawing program, let's say that you notice some source code that might not produce proper results if what it is given is not in the bounds expected. This code could produce strange results. This is a prediction.
Finally, either the
program is modified or a debugging feature in the programming environment is
used to test the prediction. Modification of the program can be the
addition of instructions that print stuff on the display. Most
programming environments include debugging features, like tracing or
setting breakpoints to suspend a program to examine its internal state.
Either way, the programmer gathers more information about what the program is
actually doing, which is, in the case of bugs, not what you expect.
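As a tiny illustration, here is a small Java sketch of the print-statement approach; everything here (the lineLength helper, its inputs) is hypothetical, invented just to show the technique:

public class DebugPrintSketch {
    // Hypothesis: the length computation goes wrong for some inputs.
    static int lineLength(int start, int end) {
        int length = end - start;   // the suspect instruction
        System.out.println("DEBUG lineLength: start=" + start
                + " end=" + end + " length=" + length);
        return length;
    }

    public static void main(String[] args) {
        lineLength(200, 50);   // prediction: the debug line shows a negative length
    }
}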
Testing, even if it
does not provide an answer, gets you additional information that can be used to
repeat the process. You narrow your exploration until you find your
mistake.
Ok... since you are reading this, accessing this web page on the net, you have Google, and it takes a fraction of a second to answer the question of who the world's first computer programmer was.
Ada Lovelace,
was an English mathematician and
writer chiefly
known for her work on Charles Babbage's
early mechanical
general-purpose computer, the Analytical
Engine. Her notes on the engine include what is
recognised
as the first
algorithm intended to be carried out by a
machine. Because
of this, she is often described as the
world's first
computer programmer.
SUMMARY
Computer programming is the composing/authoring of a process/procedure for doing something - the source code representation of algorithms - in great detail.
proc-ess /
Noun:
A series of actions or
steps taken to achieve an end.
pro-ce-dure /
Noun:
A series of actions
conducted in a certain order.
al-go-rithm /
Noun:
An ordered set of
steps to solve a problem.
Since computers do not
understand English and it would be impossible for a human to write a large
program as a series of binary numbers that the computer can understand, we need
something in between. High-level programming languages currently fit in this
category. Given a programming language that you have chosen, you then
follow its rules for composing statements (or expressions) that instruct the
computer to do what you want.
Because a computer is simply a very fast manipulator of bits (ones and zeros), through the power of abstraction, computer scientists have layered levels of object representation and functionality, one on top of another. We have now been working on refining and extending these layers for over half a century. Table 1.6 should give you a feel for where we stand with computer programming languages today.
In the next lesson
you'll start to write programs in Logo!
Language Level      | Examples
High-Level Language | Logo, Python
Low-Level Language  | C
Assembler Language  | Intel X86
Machine Language    | Intel X86

Table 1.6