Monday, April 30, 2007

Comments on the death of computing

This article starts off as a complaint, or a lament, about the state of edge CS, and probably serves as a warning, though the conclusion is not as hopeful or optimistic as it could be. Or possibly that's a lack of imagination. To start:
There was excitement at making the computer do anything at all. Manipulating the code of information technology was the realm of experts: the complexities of hardware, the construction of compilers and the logic of programming were the basis of university degrees.
...
However, the basics of programming have not changed. The elements of computing are the same as fifty years ago, however we dress them up as object-oriented computing or service-oriented architecture. What has changed is the need to know low-level programming or any programming at all. Who needs C when there's Ruby on Rails?
Well, part of it is probably a lament by the author--presumably a scholar--on the loss of status and the general dilution in the quality of people in the field. And the other part is about how there's nowhere interesting left to explore in the field.

To address the first part, it's well known that engineers, programmers (or any other profession) like to work with great and smart people. Usually, when a leading field explodes, you're going to attract those great and smart people to it. However, the nature of technology is to make doing something cheaper, faster, or easier, and as a technology matures, the barriers to entry in the field get lower. As a result, you get people who couldn't have made it into the field before, and the average quality of its people dilutes. People used to do all sorts of research on file access; now, any joe programmer doesn't think about any of that and just uses the 'open' method to access files on disk. But that's the nature of technology, and it's as it should be.

The environment within which computing operates in the 21st century is dramatically different to that of the 60s, 70s, 80s and even early 90s. Computers are an accepted part of the furniture of life, ubiquitous and commoditised.
And again, this is the expected effect of technology. Unlike other professions, in engineering one is able to build technology that gives people leverage over those who don't use it. This gives an advantage in acceleration and scalable productivity that you won't find in other professions. If you're a dentist, there is an upper limit to the number of patients you can see. To be even more productive, you'd need to create a clinic--a dentist farm--to parallelize patient treatment, and you need other dentists to do that. If you're an engineer, the technology that you build is a multiplier, and you don't even need other people to use the multiplier.

But at a certain point, the mass adoption of a technology makes it cheaper, and hence your leverage over other people isn't as great, and you begin to look for other technologies to make your life easier or give you an edge over your competition. But these are all applications arguments for CS; while important in attracting new talent, they don't address where the field has left to go on the edge.

As for whether CS is really dead or not, I think there's still quite a bit of work to be done at the edges. Physicists in the late 1800s claimed that there wasn't much interesting going on, until General Relativity blew up in their faces. Biology had its big paradigm shift with Darwin, but there's still a host of interesting unknown animals being discovered (like the giant squid), and I'm sure alien biology or a revival of Darwin's sexual selection would help open up another shift. Engineering suffered the same thing in the early 1900s, when people with only a background in electromechanical and steam-powered devices thought there wasn't much left to invent or explore, until the advent of computing, spurred on by the Second World War.

In terms of near-term computing problems, there's still a lot of work to be done in AI and all its offshoot children, such as data mining, information retrieval, and information extraction. We still can't build software systems reliably, so better programming constructs are ever being explored. Also, since multi-core processors are starting to emerge, better concurrent programming constructs are being developed (or rather, taken up again...Seymour Cray was doing vector processors a long while back).

But I'm guessing the author of the article is looking for something like a paradigm shift, something so grand that it'll make the field prestigious again, and attract some bright minds again.

In the end, he is somewhat hopeful:
The new computing discipline will really be an inter-discipline, connecting with other spheres, working with diverse scientific and artistic departments to create new ideas. Its strength and value will be in its relationships.

There is a need for innovation, for creativity, for divergent thinking which pulls in ideas from many sources and connects them in different ways.
This, I don't disagree with. I think far-term computing can draw from other disciplines as well as be applied to them. With physics, there's current work on quantum computers. In biology, computing contributes through bioinformatics and the sequencing of genes, and draws from it with ant colony optimization algorithms and DNA computers. In the social sciences, computing contributes concurrent and decentralized simulations of social phenomena, and draws from them with particle swarm optimization.

One day, maybe it will be feasible to hack your own bacteria and program them just as you would a computer. And then a professor might lament that any 14-year-old kid can hack his own lifeform when it used to be the realm of professors. But rest assured, there will always be other horizons in the field to pursue.

Sunday, April 29, 2007

Ruby Quiz #122 Solution: Checking Credit Cards using meta-programming

So this is the first time I actually did a RubyQuiz for real. I spent probably 3 or 4 hours on it. Not too shabby. And, I got to do a little bit of meta-programming! It's basic meta-programming, but I liked the solution. Brief intro to the quiz:
Before a credit card is submitted to a financial institution, it generally makes sense to run some simple reality checks on the number. The numbers are a good length and it's common to make minor transcription errors when the card is not scanned directly.

The first check people often do is to validate that the card matches a known pattern from one of the accepted card providers. Some of these patterns are:

+============+=============+===============+
| Card Type  | Begins With | Number Length |
+============+=============+===============+
| AMEX       | 34 or 37    | 15            |
+------------+-------------+---------------+
| Discover   | 6011        | 16            |
+------------+-------------+---------------+
| MasterCard | 51-55       | 16            |
+------------+-------------+---------------+
| Visa       | 4           | 13 or 16      |
+------------+-------------+---------------+
There are more rules for each credit card at Wikipedia. So normally, how would you do this with OO design? The first thing that came to mind was creating a general CreditCard base class and using polymorphism to implement the rule for each type of card, with each card type as a subclass of CreditCard (i.e., Mastercard extends CreditCard). The problem with this, I've always found, is that there's a proliferation of classes when you do something like this. People have solved that problem with other patterns, such as Factories, to build families of classes.
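Just to make the trade-off concrete, here's a minimal sketch of that class-per-card approach (my own hypothetical illustration, not code from the quiz):

class CreditCard
  def initialize(num)
    @num = num
  end

  def valid_type?
    raise NotImplementedError
  end
end

class Mastercard < CreditCard
  def valid_type?
    (@num =~ /^5[1-5]/ && @num.length == 16) ? true : false
  end
end

class Amex < CreditCard
  def valid_type?
    ((@num =~ /^34/ or @num =~ /^37/) && @num.length == 15) ? true : false
  end
end

# ...and so on, one subclass per provider, plus a Factory if you
# want to instantiate the right subclass from a raw number.

You can see how the classes pile up before you've written any real logic.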

But that's a lot of structure that I didn't want to write for a little RubyQuiz. So I opted for case statements at first:
def type(cc_num)
  case cc_num
  when /^6011.*/
    return :discover if cc_num.length == 16
  when /^5[1-5].*/
    return :mastercard if cc_num.length == 16
  # ...other card rules...blah blah blah
  end
  return :unknown
end
But as we all learned from having to maintain a proprietary server program written in C, nested 11 or 12 layers deep, all in one main file, case statements suck and don't scale (guess who had to do that?). So what's a better solution? I'd like to think I came up with a nice one.

With dynamic programming languages, I find that a lot of the problems that design patterns solve simply go away. And meta-programming can be a much more flexible tool for solving design problems than design patterns. In a way, I created a very, very tiny domain-specific language for checking credit card type and validity. All you need to do to use it is define the rules from the table above in a class which subclasses the credit card checker:
require 'credit_card_checker'

class MyCreditCardChecker < CreditCardChecker
  credit_card(:amex)       { |cc| (cc =~ /^34.*/ or cc =~ /^37.*/) and (cc.length == 15) }
  credit_card(:discover)   { |cc| (cc =~ /^6011.*/) and (cc.length == 16) }
  credit_card(:mastercard) { |cc| cc =~ /^5[1-5].*/ and (cc.length == 16) }
  credit_card(:visa)       { |cc| (cc =~ /^4.*/) and (cc.length == 13 or cc.length == 16) }
end

CCnum = "4408041234567893"
cccheck = MyCreditCardChecker.new
puts cccheck.cctype(CCnum) # => :visa
puts cccheck.valid?(CCnum) # => true
Neat! So this way, you can have any type of credit card checker you want, in any combination. And if suddenly there were a proliferation of new credit card companies, you could add them pretty easily. How is this done? Well, let me show you:
require 'enumerator'

class CreditCardChecker
  def self.metaclass; class << self; self; end; end

  class << self
    attr_reader :cards

    def credit_card(card_name, &rules)
      @cards ||= []
      @cards << card_name

      metaclass.instance_eval do
        define_method("#{card_name}?") do |cc_num|
          return rules.call(cc_num) ? true : false
        end
      end
    end

  end

  def cctype(cc_num)
    self.class.cards.each do |card_name|
      return card_name if self.class.send("#{card_name}?", normalize(cc_num))
    end
    return :unknown
  end

  # the Luhn (mod 10) check: double every second digit from the right,
  # then sum all the digits of the result
  def valid?(cc_num)
    rev_num = []
    normalize(cc_num).split('').reverse.each_slice(2) do |pair|
      rev_num << pair.first.to_i
      rev_num << pair.last.to_i * 2 if pair.size == 2 # guard for odd-length numbers
    end
    sum = rev_num.join('').split('').inject(0) { |t, digit| t + digit.to_i }
    (sum % 10) == 0
  end

  private

  def normalize(cc_num)
    cc_num.gsub(/\s+/, '')
  end
end
If you don't know much about meta-programming yet, you might want to try _why's take on seeing metaclasses clearly, along with Idiomatic Dynamic Ruby. Don't worry if it takes a while...I was stumped for a while too.

Anyway, the magic is in the method credit_card(). Notice it's between "class << self" and "end", which means this method is defined in the singleton class of the class CreditCardChecker. But you can just think of it as a class method. Same thing with the method metaclass(): it's a class method that returns the singleton class of the caller.

Now, this isn't very exciting in itself. However, notice that credit_card() is executed in the subclass MyCreditCardChecker. This means that inside credit_card(), metaclass() returns NOT the singleton class of CreditCardChecker, but the singleton class of MyCreditCardChecker! Then, when we proceed to do an instance_eval() and a define_method(), we are defining a new method in the singleton class of the subclass MyCreditCardChecker. Inside the method, it calls the block that evaluates the rule given for that card, returning true if the block returned a truthy value and false otherwise. The only reason I did it that way is so that in case the block returns an object, it'll return true instead of the object.
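To see that mechanic in isolation, here's a toy version of the same trick (my own illustration, not from the quiz solution):

class Base
  def self.metaclass; class << self; self; end; end

  def self.make_method(name)
    # self here is whatever class this is invoked on, so in a
    # subclass, metaclass is the subclass's singleton class
    metaclass.instance_eval do
      define_method(name) { "hello from #{name}" }
    end
  end
end

class Sub < Base
  make_method(:foo)
end

puts Sub.foo                # => hello from foo
puts Base.respond_to?(:foo) # => false; only Sub's singleton class got the method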

Therefore, to any instance of MyCreditCardChecker, it will look like there's a class method with the name of the credit card. So if you did:
require 'credit_card_checker'

class MyCreditCardChecker < CreditCardChecker
  credit_card(:amex) { |cc| (cc =~ /^34.*/ or cc =~ /^37.*/) and (cc.length == 15) }
end
then MyCreditCardChecker.amex?(cc_num) would be a valid method that checks whether the credit card number is an American Express card. And what the cctype() method does is cycle through all the known credit cards and return the first one that matches. The rest is standard fare, so I won't go through it.

And oh, btw, each_slice() and each_cons() got moved to the standard library, so you have to require 'enumerator' in order to use them--even though the official Ruby docs say they're still in the Enumerable module in the core language.
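For instance, a quick sanity check of the slicing behavior (assuming Ruby 1.8, where these methods live in the enumerator standard library):

require 'enumerator'

[1, 2, 3, 4, 5].each_slice(2) { |slice| p slice }
# prints:
# [1, 2]
# [3, 4]
# [5]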

Saturday, April 28, 2007

Inconsistent virtual realities for social augmentation

Cognitive Daily: If you want to persuade a woman, look straight at her:
"There is a considerable body of research showing that eye contact is a key component of social interaction. Not only are people more aroused when they are looked at directly, but if you consistently look at the person you speak to, you will have much more social influence over that person than you would if you averted your gaze....Since each individual's virtual experience is generated separately, in a "room" full of people, each person could experience the phenomenon of everyone else looking at them. Everyone can be the center of attention, all at the same time!"
That's an interesting way to view things. I hadn't thought much about it, since generally, simulations and games work hard to maintain a consistent world state.

But as we know from Horchow and Carnegie, people are mostly interested in themselves. Using inconsistent realities to facilitate or even manipulate social interactions is both fascinating and a bit unnerving, due to its immediate implications for social engineering, since most modern people in the Western world believe in free will.

However, I think it can certainly be put to good use, especially in customer service, to help make a customer feel like they're getting special and speedy attention. In the future, if there are non-player characters working as store clerks in either augmented or virtual realities, a customer can have the benefit of seemingly personalized attention.

I can see this implemented in a physical store, where a customer walks in and an augmented store clerk helps them out. And if two customers, say two girls out shopping together, are listening to the same augmented store clerk, the image can be altered for each so that it seems like the clerk is addressing both of them at the same time.

As for the article's claim of gender differences, the sample size is pretty small: only 6 male pairs and 6 female pairs for each of the 3 study groups. But the difference between genders is pretty significant in the graph...and offhand, I don't see any manipulation of the graph to make the results seem more significant than they are.

Friday, April 27, 2007

Adobe open sources Flex, it'd be nice for mobile too

Now that's news. I think it's a good strategy on their part, since there's still work to be done in the adoption phase of user interfaces, both on the web and on mobile devices. What is most interesting is whether Adobe plans to use some version of Flex as a platform for mobile devices. Currently, that's done in Java ME, and after trying it out, I found it hard, because the tools were still a bit inadequate, and it's still not easy to get applications onto phones.

With an open-sourced language for rich/heavy front-ends, I wouldn't be surprised if this gains quick adoption, as I see only OpenLaszlo and Microsoft's Silverlight as the alternatives. AJAX will have to find other tricks for its sleeve, like faster JavaScript engines...This whole scene will be something to keep an eye on, as it'll be interesting to see how it plays out.

Thursday, April 26, 2007

Reconnecting to database server in Rails

I've had more posts up my sleeve, though I haven't had time to actually polish them up. I should take my blog posts back to their roots, where I just said anything as a first draft. That way, you'll get more stuff. So as usual, in my travels through Rails-land, I happened across something that I don't think gets seen too often...since I couldn't find it on the first page of Google. It was an error like this:
>> user = Account.find(1)
ActiveRecord::StatementInvalid: Mysql::Error: MySQL server has gone away:
SELECT * FROM accounts WHERE (accounts.id = 1) from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.0/lib/active_record/
connection_adapters/abstract_adapter.rb:128:in `log'
...blah blah blah...
Since connections are expensive (in terms of time) to make, web frameworks--and anyone making raw connections to the database--will use the same connection for multiple SQL queries, and close the connection when done.

Usually you won't see this in Rails, because it does a pretty good job of maintaining the connection, either per session or per user action in the controller. However, when you have a background process running with something like BackgrounDrb, if there's no activity between the background worker and the database for a couple of hours, the database is going to close the connection, while the worker still thinks the connection is valid. In other words, ActiveRecord::Base.connected? will return true.

Here is also where I found a use for 'else' in begin/rescue blocks, as mentioned by Jamis Buck. When the connection has gone cold, we can't really tell that it's because it's been sitting there too long: it raises an ActiveRecord::StatementInvalid, which is the same thing raised when you have a bug during development. As a simple fix, I just wanted something that would try reconnecting to the database once, in case the failure was only because the connection was cold.
class SomeBackgroundWorkerClass
  def initialize
    @already_retried = false
  end

  def some_database_operation
    begin
      Account.find(1)
      # or some other database operations here...
    rescue ActiveRecord::StatementInvalid
      ActiveRecord::Base.connection.reconnect!
      unless @already_retried
        @already_retried = true
        retry # run the begin block again with the fresh connection
      end
      raise # we already retried once, so this is probably a real error
    else
      @already_retried = false # success, so re-arm the single retry
    end
  end
end
So, that way, as long as it succeeds every other time, it'll keep on going. Tip!

Tuesday, April 17, 2007

Log files in XML, YAML, or JSON?

Currently, log files are almost always in a form that is hard for machines to parse: either a comma-separated form or an arbitrary proprietary format. Why is that? The primary assumption of log files is that a human will read them. But usually, no human reads them unless something goes wrong, and then only in a reactive sense.

Of course, no human wants to look at log files all day long. This is the kind of thing that machines would be great at...if only they could read them. What we can do to help log file processing is to put logs into formats that are easily transferable and readable by both humans and machines. Isn't that the primary goal of data formats such as XML, YAML, and JSON? A machine that can read log files can monitor them and do analysis on them, presenting information to users that wouldn't be apparent from just reading the log file straight through.
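As a minimal sketch of the idea (my own illustration; the field names are hypothetical, and it assumes the json gem is installed), a web app could emit one JSON object per line, and any language with a JSON library could read it back:

require 'rubygems'
require 'json'

# a hypothetical access-log entry, written as one JSON object per line
entry = {
  'time'   => Time.now.utc.strftime('%Y-%m-%dT%H:%M:%SZ'),
  'ip'     => '127.0.0.1',
  'method' => 'GET',
  'path'   => '/articles/1',
  'status' => 200
}
File.open('access.log', 'a') { |f| f.puts entry.to_json }

# a monitoring script parses each line just as easily:
# JSON.parse(line) # => {"time"=>"...", "status"=>200, ...}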

And yet, most of our log files are in proprietary formats, especially for web servers and web applications. This might not be as much of a problem for long-standing programs like Apache: they've been around long enough that their log format has stabilized, and there are specialized programs to parse and analyze those log files.

In addition, I think (correct me if I'm wrong) that JavaScript, if not strict JSON, allows you to carry code as if it were data. Having that code perform specific transformations on the log data when processing it might be useful. It would be like having transformed data transparently available to the parsing/analysis program. It would also cut down on the amount of extra programming needed for the analysis program, since the log would know how to generate specific pieces of information not explicitly written in itself.

Monday, April 16, 2007

Code quickie: How to interlace two arrays in ruby

Hrm, I'm not sure whether it's worth posting or not, but I was looking for a way to interlace two arrays in Ruby. This is what I came up with originally, and it worked fine for a bit:
class Array
  def interlace(other_array)
    interlaced_array = []
    self.each_with_index { |x, i| interlaced_array << x << other_array[i] }
    return interlaced_array
  end
end
This code has the problem that on arrays of different sizes, it'll either leave off the longer array's remaining elements or insert nils for the shorter array. That isn't a good default behavior. What we'd like is for the longer array, whichever one it is, to get its remaining elements tacked onto the end of the interlaced array after the elements of the shorter array have run out.

I decided to do it recursively. I haven't written anything recursive since that post on Erlang.
class Array
  # Interlaces an array with another array, dovetailing the two together.
  #
  #   [1,2,3,4,5,6].interlace([7,8,9]) # => [1, 7, 2, 8, 3, 9, 4, 5, 6]
  #   [1,2,3].interlace([1,2,3,4,5])   # => [1, 1, 2, 2, 3, 3, 4, 5]
  def interlace(other_array)
    return other_array if self.empty?
    return [self[0]] + other_array.interlace(self[1..-1])
  end
end
Great, now it works on different-sized arrays! What you'll notice is that unlike most recursions, this one switches places with the other array on every recursion, with the call other_array.interlace(self[1..-1]). It's the first time I've seen a recursion like this. It certainly simplifies the code immensely, since you don't have to check which array is bigger or smaller. Note, however, that this only works because the method is public: you can't call a private method on an explicit receiver like other_array, so the technique doesn't work for private recursion helpers.

While you don't get to use recursion all that often, I find its solutions are often pretty elegant compared to iteration. I think it will become more useful when we start using data structures that are more fractal in nature. Currently, we have lists and trees. Hrm, there might be some possibilities here. For now, we'll keep this short, and if I come up with anything on this front, I'll let ya'll know! Tip!

Monday, April 09, 2007

Updating just the join table

Having a model with a has_and_belongs_to_many relationship to another model affords you a bunch of convenience methods that get created when you define the relationship. These are all pretty nice. But I found that I had to forgo these methods for a cruder approach.

Let's say you have two models, taken from the Rails book: Article and User.

class Article < ActiveRecord::Base
  has_and_belongs_to_many :users
end

class User < ActiveRecord::Base
  has_and_belongs_to_many :articles
end

In order to create a new article and associate it with a user right away, you can use create!:
user = User.find(session[:user].id)
user.articles.create!(:title => "The Art of FizzBuzz")

But sometimes, an article might be linked to other models as well. Let's say that there's a Shelf model, and an Article habtm Shelves too. Then, you'd have to pull something like:
user = User.find(session[:user].id)
shelf = Shelf.find(params[:shelf_id])
article = user.articles.create!(:title => "Go and foobar yourself")
shelf.articles << article


Now, that last line is tricky. It's adding the new article to the articles of a shelf. Technically, it should just be inserting ids into the join table. However, that's not the case: it will ask the shelf to load all of its articles first, and then update the join table. If you're going to manipulate that shelf's articles later on in the controller method, this would be the way to go.

However, if you're importing articles from the net, that might not work so well. In that case, you just need to add the association between articles and shelves in the join table. The current implementation of <<, concat, and push seems to force that full query at least once.

Therefore, if the shelf has a lot of articles, you'll experience a large slowdown in importing your articles--for every new article, you're asking the database to return a list of all current articles on that shelf. The database caches common queries, but in this case that doesn't help, since you're importing a new article every time, and each can belong to different shelves. By the time you come back to the same shelf, it may have been cleared from the cache already.

This is very much like Joel's story about Shlemiel the Painter. It's not that <<, concat, and push are implemented poorly, but that they're being used in a different scenario with different assumptions--that you're going to be doing other things with the collection within the scope of the controller method.

The only solution I've come up with is an ugly one. I created a model out of the join table and added a method called link. It finds the association, and if it doesn't find one, it creates it.
class ArticlesShelves < ActiveRecord::Base
  def self.link(article, shelf)
    find_by_article_id_and_shelf_id(article.id, shelf.id) ||
      create!(:article_id => article.id, :shelf_id => shelf.id)
  end
end


This has lowered the import time from about a minute and a half per article (for an article belonging to a shelf with lots of articles) to about 0.5 seconds per article on a low-powered machine. I personally don't like this solution, since it introduces a very specialized model object with only one purpose, rather than a cohesive set of responsibilities.

While it is possible to push the method link into both Article and Shelf, I'm not sure exactly how to query just the join table if the ActiveRecord counterpart ArticlesShelves doesn't exist, other than using find_by_sql(). But even then, how do you execute an "insert" SQL query?
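One possibility I haven't fully tested: ActiveRecord's raw connection object can execute arbitrary SQL, including inserts, so a sketch like this might let Article do the linking without a join model (link_to_shelf is a name I made up):

class Article < ActiveRecord::Base
  has_and_belongs_to_many :users
  has_and_belongs_to_many :shelves

  # hypothetical helper: insert directly into the join table,
  # skipping the load-all-the-articles behavior of shelf.articles <<
  def link_to_shelf(shelf)
    connection.insert(
      "INSERT INTO articles_shelves (article_id, shelf_id) " +
      "VALUES (#{id}, #{shelf.id})"
    )
  end
end

It has the same specialized-single-purpose smell, though, so I'd still like a cleaner way.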

If you've got a better solution, let's hear it. :)

Saturday, April 07, 2007

Erlang and neural networks, part II

Two weeks ago, I did a post about Erlang (Part I), and how a simple feed-forward neural network might be a nice little project to do on the side, just to learn about Erlang. Here's what came next.

State of the Purely Functional

In the transition from imperative/procedural programming to functional programming, there are obviously things that you have to get over. You'll hear this from a lot of people just learning functional programming for the first time (myself included). The hardest thing for me to get over in a pure functional language is the absence of state. My first reaction was, "Well, how do you get anything done?"

Not having state has its advantages, and you'll hear a lot about side effects and referential transparency. But I'd like to think of it this way: things that don't have state can't be broken--they just exist. However, state is useful in computation, and different languages have different ways of handling it. With Haskell, you use monads. At first, I figured it was the same with Erlang, but this short tutorial on Erlang simply states that Erlang uses threads (what Erlang calls processes) to keep state.

This maps pretty well onto what I'm trying to do. Each perceptron will be a thread, and they'll send messages back and forth to each other as they fire and stimulate one another.

The essence of a perceptron

So once again, this is a perceptron: it's a weighted sum (a dot product) of the inputs, which is then thresholded by f(e). So we'll write a thresholding function and a weighted sum in Erlang.

We start by declaring the name of the module, and the functions to export from the module.
-module(ann).
-export([perceptron/3, sigmoid/1, dot_prod/2, feed_forward/2,
         replace_input/2, convert_to_list/1]).
I exported most of the functions, so I can run them from the command line. I'll remove them later on.

First we write our thresholding function. We will use the sigmoid function as our thresholding function. It's pretty easy to explain: a value X goes in, another value comes out. It's a math function.
sigmoid(X) ->
  1 / (1 + math:exp(-X)).
Since I wasn't familiar with all the libraries in Erlang, I wrote a dot product function myself, and it wasn't too bad. Erlang, for the most part, doesn't use loops, just as Ruby doesn't. Both can, if you want to write a FOR control function, but the common way is to use library functions for list processing, list comprehensions, or recursion. The first clause is the base case, and the second is what you'd do if the "recursion fairy" took care of the rest.
dot_prod([], []) ->
  0;
dot_prod([X_head | X_tail], [Y_head | Y_tail]) ->
  X_head * Y_head + dot_prod(X_tail, Y_tail).
Simple so far, right? So to calculate the feed-forward output of a perceptron, we'll do this:
feed_forward(Weights, Inputs) ->
  sigmoid(dot_prod(Weights, Inputs)).

The body of a nerve

So far, so good. But we still need to create the actual perceptron! This is where the threads and state-keeping come up.
perceptron(Weights, Inputs, Output_PIDs) ->
  receive
    {stimulate, Input} ->
      % add Input to Inputs to get New_inputs...
      % calculate output of perceptron...
      perceptron(Weights, New_inputs, Output_PIDs)
  end.
This is a thread, and it receives messages from other threads. Currently, it only accepts one message, {stimulate, Input}. This is the message other perceptrons will use to send their output to this perceptron's inputs. Notice that at the end of handling the message, we call perceptron() again, with New_inputs. That's how we maintain and change state.

Note this won't result in a stack overflow, because Erlang performs last-call (tail-call) optimization: the recursive call is the last thing the function does, and no state needs to be kept between message calls--everything you need to know is passed into perceptron--so the runtime can throw away the previous instance of the call to perceptron.

We do hit a snag, though. How do we know which perceptron an incoming input is from? We need to know this, because we need to weight each input correctly. My solution is to make Input a tuple consisting of {Process_ID_of_sender, Input_value}. Then I keep a list of these tuples, like a hash of PIDs to input values, and convert it to a plain list of input values when I need to calculate the output. Therefore, we end up with:
perceptron(Weights, Inputs, Output_PIDs) ->
  receive
    {stimulate, Input} ->
      % add Input to Inputs to get New_inputs...
      New_inputs = replace_input(Inputs, Input),

      % calculate output of perceptron...
      Output = feed_forward(Weights, convert_to_list(New_inputs)),

      perceptron(Weights, New_inputs, Output_PIDs)
  end.

replace_input(Inputs, Input) ->
  {Input_PID, _} = Input,
  lists:keyreplace(Input_PID, 1, Inputs, Input).

convert_to_list(Inputs) ->
  lists:map(fun(Tup) ->
              {_, Val} = Tup,
              Val
            end,
            Inputs).
The map function you see in convert_to_list() is the same as the map function in Ruby, which would go:
def convert_to_list(inputs)
  inputs.map { |tup| tup.last }
end
Now, there's one last thing that needs to be done. Once we calculate an output, we need to fire it off to the other perceptrons that accept this perceptron's output as an input. And if it's not connected to another perceptron, then it should just print out its value. So then we end up with:
perceptron(Weights, Inputs, Output_PIDs) ->
  receive
    {stimulate, Input} ->
      New_inputs = replace_input(Inputs, Input),
      Output = feed_forward(Weights, convert_to_list(New_inputs)),
      if Output_PIDs =/= [] ->
           lists:foreach(fun(Output_PID) ->
                           Output_PID ! {stimulate, {self(), Output}}
                         end,
                         Output_PIDs);
         Output_PIDs =:= [] ->
           io:format("~n~w outputs: ~w", [self(), Output])
      end,
      perceptron(Weights, New_inputs, Output_PIDs)
  end.
We know which perceptrons to output to because we keep a list of the perceptron PIDs that registered with us. So if the list Output_PIDs is not empty, then for each PID, we send a message containing a tuple with this perceptron's PID as well as the calculated Output value. Let's try it out:

Test Drive


1> c(ann).
{ok,ann}
2> Pid = spawn(ann, perceptron, [[0.5, 0.2], [{1,0.6}, {2,0.9}], []]).
<0.39.0>
3> Pid ! {stimulate, {1,0.3}}.

<0.39.0> outputs: 0.581759
{stimulate,{1,0.300000}}
4>
So you can see, we got an output of 0.581759. We can verify this on our TI-85 calculator:
x = 0.5 * 0.3 + 0.2 * 0.9
Done
1 / (1 + e^-x)
.581759376842
And so we know our perceptron's feed-forward is working! Next time, we'll figure out how to propagate its error back to adjust the weights, and how to connect perceptrons up to each other.

Erlang and Neural Networks Part I
Erlang and Neural Networks Part II
Erlang and Neural Networks Part III

Thursday, April 05, 2007

Google Maps of the World of Hello World

Google Maps now allows you to create your own maps. The title link is a map of the major programming languages in use around the world. So it's like lots of little hellos around the world. Cute. Looks like Africa, India, Australia, and South America have a lot of catching up to do. They don't include all the languages in the world; Japan would also have quite a few if it listed all the variants in the Lisp family.

It's also noticeable that the coasts dominate with programming languages. UIUC's gotta step up.

But most significantly, making your own maps has been a long time coming, and I would have originally thought they'd leave mappr.com and Frappr alone in this field. But I think it makes strategic sense for them, especially if they make it easy to post maps that people create.

Tuesday, April 03, 2007

"Web 3.0" and "Killer App" sound like "Crystal Ball" to me

Ahh, web 3.0.

Indeed, as nanobeeper asks and puts into perspective in What's up with the web 2.0 angst?, there doesn't seem to be a need to get bent out of shape over the term. And yet, I usually don't use the term myself, and am pretty reluctant to, for fear of being someone-who-doesn't-know-what-they're-talking-about, like the braying butthole in Jeffrey Zeldman's famous post. It's what happens when marketers get out of control, and generally, it applies when someone knows just enough to be dangerous. Fanboys of Japan are a good example: if you meet someone that LOVES Japan, they've either only watched anime (or been to Japan once or twice), or they've lived there for at least a decade. Usually the former.

But what I want to post about today isn't what I think is or isn't web 3.0, but the usage of the term. Why do people use it?

Killer app 3.0

It's an interesting parallel that ever since VisiCalc came out and the term "killer app" was coined, people have been talking about the "killer app" of this platform or that. "The killer app of the web is..." "The killer app of mobile phones is..." It certainly reminds me of the way people talk about web 2.0. "Web 2.0 is..." "Web 3.0 is..."

The similarity between talking about web 3.0 and talking about killer apps is that when people use those terms, they're trying to communicate what they see, predict, or would like to be the future. Technologists are, if anything, always looking for the Next Big Thing. We're used to change; in fact, we thrive on it. We're all interested in the future of change because if we're right about it, that kind of information is an advantage toward whatever our goals are. But as we all know, predicting the future is, well, inaccurate at best.

I'd trade intelligence for hindsight

Oftentimes, we have limited scope, experience, and knowledge. That certainly affects what we think to be in the realm of the possible and what we think to be in the realm of the impossible. If you look back on quotes of technology predictions, some of them might stun you with how stupid they are. But then again, you have the gift of hindsight. Keep in mind what technology was available at the time for them to relate the new tech to, as well as the fact that the first iteration of any product sucks--as Guy Kawasaki so famously points out. (If you want to read more, they're from Wikipedia.)
"Heavier-than-air flying machines are impossible." -- Lord Kelvin, British mathematician and physicist, president of the British Royal Society, 1895
"Who the hell wants to hear actors talk? The music — that's the big plus about this." -- Warner Bros. was investing in sound technology, though Harry Warner was more excited about the potential of scoring over dialogue.
"Caterpillar landships are idiotic and useless. Those officers and men are wasting their time and are not pulling their proper weight in the war." -- Fourth Lord of the British Admiralty, 1915
"The wireless music box has no imaginable commercial value. Who would pay for a message sent to no one in particular?" -- Associates of David Sarnoff responding to the latter's call for investment in the radio in 1921.
"While theoretically and technically television may be feasible, commercially and financially it is an impossibility, a development of which we need waste little time dreaming." -- Lee DeForest, American radio pioneer and inventor of the vacuum tube, 1926
The last two quotes are notable. Lee DeForest, who had enough foresight and innovation to see that radio had value, couldn't see beyond it to how television would have value just five years later. We all have limited breadth and imagination, but some people are worse than others. It would do you well to ignore those people. Sometimes you can recognize them when they counter with, "Why would I ever do [insert whatever idea you just told them]?"

And even if we had perfect scope, perfect breadth, it would still be hard. Predicting the future is computationally intensive.

As for myself, I didn't immediately see the value of social networking apps until Facebook showed up, even though I had read research papers on social networks. And currently, I don't really get Twitter and Scribd, but given that people are using them, well, there's value somewhere in there.

So what's the chorus in all the noise?

So where does that leave us with Web 3.0? If you look at it as people merely trying to state their predictions about the future of the web, it doesn't conjure up as much anger, because you know they may very well be wrong. But collectively, what everyone predicts to be web 3.0 will have some value, because part of it might be a self-fulfilling prophecy. If we all say it's true, you can be sure that some of us will work to make it true.

Based on that flash in the pan, I was curious: what is the collective consensus on what web 3.0 is? I looked in two places, Wikipedia and del.icio.us. Just from eyeballing it, people seem to be in consensus, at least about the semantic web. This would be the type of thing that Inkling Markets would be good for. I created a market for it, if you're so inclined to buy stock in web 3.0.

So take what any individual says to be the future with an open mind and a grain of salt, but really pay attention to where the global trend is moving. As Joe Kraus says, you want to see what the trend is, take it out of geek land, and ride that wave.