Thinking About Neural Nets



In the spirit of tackling real-world problems we don’t know the answer to, the newest focus of the data science world is “Machine Learning.” It’s a pretty general term for just about anything that involves teaching computers to do something indirectly, through data and feedback rather than explicit instructions. When I teach students how to create a Tic Tac Toe game, for example, they have to tell the computer exactly how to make each move, and there’s a lot of “if this, then that” kind of code:

for i in range(1, 10):                 # check if the computer can win on its next turn
    if board2[i] == ' ':               # if a space is open
        board2[i] = computer           # fill it with the computer's letter
        if isWinner(board2, computer): # if that move wins
            return i                   # return that move
        else:                          # otherwise
            board2[i] = ' '            # open that space again

With Machine Learning, the computer (eventually) learns the best move through repeated training trials, thousands of them. AlphaGo, the Go-playing AI from Google that just defeated world champion Lee Sedol, learned by being shown tens of millions of board positions from hundreds of thousands of games. The idea is that good moves are reinforced while bad moves are weeded out. An Artificial Neural Net is designed to work like a human brain, where input neurons connect through a layer (or layers) of intermediate neurons, eventually leading to output neurons. Here’s a simple view:

[Diagram: input neurons on the left feeding a layer of hidden neurons, which feed the output neurons on the right]
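That layered flow is easy to sketch in code, too. Here’s a minimal forward pass in Python; the layer sizes, weights, and inputs below are all made-up numbers, just for illustration:

import math

def sigmoid(x):                        # squashes any number into (0, 1)
    return 1 / (1 + math.exp(-x))

inputs = [0.5, 0.9]                    # two input neurons
w_hidden = [[0.4, 0.6], [0.3, 0.8]]    # weights from the inputs to two hidden neurons
w_output = [0.7, 0.2]                  # weights from the hidden neurons to one output

# Each neuron sums its weighted inputs, then squashes the total.
hidden = [sigmoid(sum(i * w for i, w in zip(inputs, ws))) for ws in w_hidden]
output = sigmoid(sum(h * w for h, w in zip(hidden, w_output)))
print(output)                          # the output neuron's activation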

The idea for such a network originated in the late ’50s, but progress hit a brick wall until Paul Werbos came up with the idea of backpropagation in the ’70s: after each trial the network gets feedback on its error, and the weights on the connections between neurons are recalculated to shrink that error. The mathematical methods for minimizing error are the subject of many books and doctoral theses, but suffice it to say they involve linear algebra, statistics, and calculus.
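To make that concrete, here’s a toy example of gradient descent on a single weight. The input, target, and learning rate are made up, but the chain-rule update is the real idea behind backpropagation:

x, target = 1.5, 0.6                          # one training example
w = 0.1                                       # initial weight
learning_rate = 0.1

for trial in range(50):
    prediction = w * x                        # forward pass: one linear neuron
    gradient = 2 * (prediction - target) * x  # derivative of squared error w.r.t. w
    w -= learning_rate * gradient             # nudge the weight downhill

print(w * x)                                  # very close to 0.6 after the feedback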

Simple nets can be made with surprisingly few lines of Python code. I worked with a few students at the Coder School last night, and even though none of them were ready to tackle a real net yet, we made sketches of how an impulse travels from the input neurons to the output. This is what it looks like so far: click the nodes on the left to activate a random path through the network. Notice that even after the activation fades, each click strengthens those paths a bit.

See the Pen Neural Net by Peter Farrell (@peterfarrell66) on CodePen.
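The strengthening behavior in that sketch can be modeled in a few lines. This isn’t the CodePen code itself, just a loose Python analogue with made-up numbers: each click picks a path at random, favoring stronger paths, then strengthens whichever path fired:

import random

paths = [1.0, 1.0, 1.0]    # strengths of three paths out of a node, equal at first

def click(paths, boost=0.2):
    # Pick a path at random, weighted by its current strength...
    i = random.choices(range(len(paths)), weights=paths)[0]
    paths[i] += boost      # ...then strengthen the path that fired.
    return i

for trial in range(10):    # ten simulated clicks
    click(paths)
print(paths)               # often-used paths end up noticeably stronger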


The next step would be to use the actual numbers, the weights on the paths and the values at the nodes, to realistically depict the forward- and back-propagation until the network settles on the best path through the network!
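As a rough sketch of where that leads, here’s a minimal net with one hidden layer that runs the forward and backward passes over and over until the weights settle. Everything about it (the XOR training data, the layer sizes, the learning rate) is just an illustrative choice:

import numpy as np

np.random.seed(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs: the XOR truth table
y = np.array([[0], [1], [1], [0]])              # target outputs

w0 = 2 * np.random.random((2, 4)) - 1           # weights: inputs -> 4 hidden neurons
w1 = 2 * np.random.random((4, 1)) - 1           # weights: hidden -> 1 output neuron

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

for trial in range(10000):                      # thousands of training trials
    hidden = sigmoid(X @ w0)                    # forward propagation
    output = sigmoid(hidden @ w1)
    output_error = y - output                   # feedback: how wrong was each guess?
    output_delta = output_error * output * (1 - output)           # back-propagate...
    hidden_delta = (output_delta @ w1.T) * hidden * (1 - hidden)  # ...the error
    w1 += hidden.T @ output_delta               # recalculate the weights
    w0 += X.T @ hidden_delta                    # after every trial

print(output.round(2))                          # should settle near 0, 1, 1, 0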

