
Exploring Digital Information Technologies: Lecture 1 - Part 2

The Landscape—Information and Computation

Encoding Digital Information

Picture courtesy: https://pxhere.com/en/photo/1458883
Digital information is data encoded into symbols.
The most common form of digital data is the binary digit, or bit (it stores a teeny bit of information).
A bit can either be 0 or 1.

How is information coded?

Question: What is your favorite color?
Answers: Purple, Blue, Orange, Sage Green
The answers are information about us.
Here are a few colors:
Out[]= (a row of nine color swatches)
We can use a number to represent each color.
There are 9 colors, so the numbers 0-8 can be used:
Out[]=
{0, 1, 2, 3, 4, 5, 6, 7, 8}
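One way to build such an encoding in the Wolfram Language (a minimal sketch; the nine colors below are stand-ins for whatever palette was displayed above):
In[]:=
colors = {Red, Orange, Yellow, Green, Cyan, Blue, Purple, Pink, Gray};
AssociationThread[Range[0, 8] -> colors]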
However, with bits you have only 0 and 1 to encode all the information. Why just 0 and 1?

Information is Stored Physically

How do we store information?

In a digital system, information is stored using physical quantities such as:
  • voltage,
  • crystal structure, or
  • magnetic field.
Storing Information Physically

Physical storage systems typically have two states, off and on, which we can call 0 and 1.
    That gives us a binary digit, or bit.

    How do we measure information?

    Information is measured by how many bits are needed to store it.

    How is information coded into bits?

Out[]=
{0, 1, 2, 3, 4, 5, 6, 7, 8}
    How many bits would we need to store 9 different numbers?

    Let’s start small

    1 Bit

    With 1 bit (0 or 1) we can store 2 numbers (2 pieces of information):
    Out[]=
    0 -> 0
    1 -> 1

    2 Bits

    With 2 bits we can store 4 numbers (or colors or names or 4 of anything):
Out[]=
00 -> 0
01 -> 1
10 -> 2
11 -> 3
Out[]= (the same four codes, each mapped to a color swatch)
Out[]=
00 -> Mickey
01 -> Minnie
10 -> Donald
11 -> Daisy
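All the 2-bit patterns can be enumerated directly; a quick check in the Wolfram Language:
In[]:=
Tuples[{0, 1}, 2]
Out[]=
{{0, 0}, {0, 1}, {1, 0}, {1, 1}}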

    3 Bits and 4 Bits

    With 3 bits we can store 8 numbers:
    Out[]=
    000 -> 0
    001 -> 1
    010 -> 2
    011 -> 3
    100 -> 4
    101 -> 5
    110 -> 6
    111 -> 7
    We need 1 more bit to get our 9th number in:
    Out[]=
    0000 -> 0
    0001 -> 1
    0010 -> 2
    0011 -> 3
    0100 -> 4
    0101 -> 5
    0110 -> 6
    0111 -> 7
    1000 -> 8
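Tables like these don't have to be written by hand. A minimal sketch that generates the 4-bit codes for 0 through 8 (IntegerDigits pads each code to 4 bits):
In[]:=
Table[IntegerDigits[n, 2, 4] -> n, {n, 0, 8}]
Out[]=
{{0, 0, 0, 0} -> 0, {0, 0, 0, 1} -> 1, {0, 0, 1, 0} -> 2, {0, 0, 1, 1} -> 3, {0, 1, 0, 0} -> 4, {0, 1, 0, 1} -> 5, {0, 1, 1, 0} -> 6, {0, 1, 1, 1} -> 7, {1, 0, 0, 0} -> 8}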

    From Bits to Decimal Numbers and Back

    How many different things can we represent with “n” bits?

    How many different things can 1 bit represent?
1 bit can be either 0 or 1, so it can represent 2 things.
    How many different things can 2 bits represent?
    With 2 bits we have 4 patterns: 00, 01, 10, 11
    In[]:=
    2^2
    Out[]=
    4
    How many different things can 3 bits represent?
    In[]:=
    2^3
    Out[]=
    8
    How many different things can 4 bits represent?
In[]:=
2^4
Out[]=
16
    How many different things can 31 bits represent?
    How many different things can n bits represent?
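In general, n bits can represent 2^n different things. For the exercise above:
In[]:=
2^31
Out[]=
2147483648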

    How many bits do we need to represent “n” different things?

The mathematical formula for how many bits are needed to represent n different things: bits = ⌈log₂(n)⌉ (the logarithm base 2 of n, rounded up to a whole number).
    To represent 4 different things:
    To represent 9 different things:
    To represent the 25 students in this class:
The highest binary number with 5 bits is 11111, which is 31 in decimal.
So 5 bits will allow us to represent up to 32 different things (0 through 31).
    To represent the 8 billion people on the planet:
    You could also try to get to the bits from a decimal number:
    Count the number of bits in the list above:
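A minimal sketch answering the prompts above. The number of bits needed for n different things is Ceiling[Log2[n]]:
In[]:=
Ceiling[Log2[#]] & /@ {4, 9, 25, 8*10^9}
Out[]=
{2, 4, 5, 33}
Getting the bits of a decimal number, and then counting them:
In[]:=
bits = IntegerDigits[25, 2]
Out[]=
{1, 1, 0, 0, 1}
In[]:=
Length[bits]
Out[]=
5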

    Encoding Images

    What do you see beyond zeros and ones?

    I have a matrix of 1s and 0s:
    Let me lay it out nicely:
Let’s convert the bits into an image, with 1s denoting white pixels and 0s denoting black:
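A minimal sketch with a small stand-in matrix (the smiley actually shown in the notebook is larger); Image renders 1 as a white pixel and 0 as black:
In[]:=
bits = {{0, 1, 1, 1, 0},
   {1, 0, 1, 0, 1},
   {1, 1, 1, 1, 1},
   {1, 0, 1, 0, 1},
   {0, 1, 1, 1, 0}};
Image[bits, ImageSize -> 100]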

    Encoding colored images

    What if I had a more colorful smiley?
    Modern systems often use 24 bits to specify one color.
    How many possible colors?
    The following shows how an image is encoded as Red-Green-Blue color values for each pixel.
    Given those values for each pixel you can recreate the image:
    The number of pixels in the image can be found as follows:
If the color of each pixel were represented by 24 bits, the number of bits needed to store the image would be:
In reality, information about each pixel of this image is stored using ~96 bits, so the total number of bits needed to store the image is:
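A minimal sketch of these calculations, using a built-in test image as a stand-in for the lecture's photo:
In[]:=
2^24 (* number of possible 24-bit colors *)
Out[]=
16777216
In[]:=
img = ExampleData[{"TestImage", "Mandrill"}];
pixels = Times @@ ImageDimensions[img]
Out[]=
262144
In[]:=
pixels*24 (* bits needed at 24 bits per pixel *)
Out[]=
6291456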

    What to do with lots of bits?

    8 bits = 1 byte
    How many bytes in 49,766,400 bits?
    There are larger units for measuring bits:
How many kilobytes (KB)?
How many megabytes (MB)?
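A minimal sketch of the conversions, using decimal (SI) units where 1 KB = 1000 bytes and 1 MB = 10^6 bytes:
In[]:=
bits = 49766400;
bytes = bits/8
Out[]=
6220800
In[]:=
N[bytes/10^3] (* kilobytes *)
Out[]=
6220.8
In[]:=
N[bytes/10^6] (* megabytes *)
Out[]=
6.2208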

    Compression

    Lots and lots of bits! Are all of them needed?

    So one image requires: 6,220,800 Bytes = 6.2 MB.
    What about a one-hour recorded lecture with 24 images every second?
    That is ~535 GB.
    That's why we don't usually send videos through e-mail: it's a lot of information!
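The arithmetic, as a quick check (24 images per second for 3600 seconds):
In[]:=
N[6220800*24*3600/10^9] (* gigabytes for one hour of video *)
Out[]=
537.477
which matches the ~535 GB figure above (the small difference comes from rounding).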

    What do humans do?

    Human languages use a trick to reduce the effective size of ideas.
    In particular, frequently used ideas use short words.
    Uncommon ideas require long words.

    What do modern technologies do?

Modern technologies rely heavily on compression. They use the same trick: common patterns are encoded with fewer bits (like short words), and uncommon patterns require more bits (like long words). With compression, a few GB is usually enough for a one-hour video.

    Let’s Compress
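A minimal sketch of compression in the Wolfram Language. Highly repetitive data (like a run of identical values) compresses dramatically:
In[]:=
data = Table[0, 10^6]; (* a million identical values *)
ByteCount[data] (* roughly 8 million bytes in memory *)
StringLength[Compress[data]] (* dramatically smaller after compression *)
Random data, by contrast, has no patterns to exploit, so Compress barely shrinks it.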

    What is Computation?

    The second key concept to be discussed in this class: computation
Given some information, answer a question or make a decision.
    Humans are good at such things. So are computers. Let's consider some examples.

    Travel Directions

    The shortest distance between Champaign and Atlanta:
    But you cannot drive down that straight line. So how about the shortest drive from Champaign to Atlanta:
    How far do you have to drive?
    How long will it take?
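A minimal sketch of these queries in the Wolfram Language (they require a network connection; the city entities are the standard built-in ones):
In[]:=
champaign = Entity["City", {"Champaign", "Illinois", "UnitedStates"}];
atlanta = Entity["City", {"Atlanta", "Georgia", "UnitedStates"}];
GeoDistance[champaign, atlanta] (* straight-line distance *)
TravelDistance[champaign, atlanta] (* shortest driving distance *)
TravelTime[champaign, atlanta] (* estimated driving time *)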

    Identifying Images

    Can you identify what we see in these images?
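ImageIdentify answers this kind of question computationally; a minimal sketch, using a built-in test image as a stand-in for the images shown in the lecture:
In[]:=
ImageIdentify[ExampleData[{"TestImage", "Mandrill"}]] (* returns its best-guess concept for the image *)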

    Social Network Analysis

    Here is an example of a social network among a dolphin population:
We can use computation to find communities within this social network: groups of dolphins that interact more with members of their own group than with dolphins outside it.
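A minimal sketch using the dolphin network that ships with the Wolfram Language (assuming it is the same dataset shown in the lecture):
In[]:=
g = ExampleData[{"NetworkGraph", "DolphinSocialNetwork"}];
communities = FindGraphCommunities[g];
Length[communities] (* number of communities found *)
CommunityGraphPlot[g, communities] (* highlight the communities *)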

    Church-Turing Hypothesis

Alan Turing introduced the Turing machine (1936), an abstract mathematical model of a computing device.
Alonzo Church proposed the hypothesis (1936): “Every computation that can be carried out in the real world can be effectively performed by a Turing Machine.”
So the Church-Turing hypothesis in simple words: “Computers and humans can compute the same things.”
    Of related interest: Principle of Computational Equivalence by Stephen Wolfram.

    Terminology You Should Know from These Slides

  • information
  • computation
  • technology
  • bit (BInary digiT) and Byte (8 bits)
  • Kilo, Mega, Giga, Tera, (Peta, Exa)
  • compression
Concepts You Should Know from These Slides

  • information is stored as bits (0s and 1s)
  • information is stored as physical quantities: voltage or magnetic state
  • how to find the number of bits needed to represent one thing from a group of things
  • compression is used to reduce the number of bits needed for more common things in a group
  • humans and computers are (in theory) equally capable of computation, given sufficient memory
    and time (the Church-Turing hypothesis)
