This post is mostly inspired by the Erlang master class taught at the University of Kent, though with some modifications. The master class is about building a simple calculator program in Erlang, with little said about parsing, which is done by hand there. However, Erlang ships with leex and yecc, tools designed to handle exactly that parsing part, yet they are far less documented than their lex and yacc counterparts. This blog post will, I hope, help shrink that documentation gap.
It’s been an unfortunately usual long pause in my writing, caused mainly by a heavy workload and a lack of both time and topics to cover. I had several posts sitting as drafts but failed to finish them. Instead, as a kind of escape from the dead end I was stuck in, I decided to move the blog from Jekyll to Hugo and, additionally, give it a distinctive name, though it is still hosted on GitHub Pages.
Language identification, as is easy to guess, is the task of determining the language of a document. For instance, search engines may store the language of each indexed document and offer options such as “Search for English results only”, as Google does. But in order to store the language, the engine must determine it first.
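The post itself doesn’t show an algorithm here, but a classic way to do language identification is to compare character n-gram frequency profiles. The sketch below is a minimal, assumption-laden illustration (the `profile`/`identify` helpers and the toy two-language corpora are mine, not from the post), roughly in the spirit of rank-order profile matching:

```python
from collections import Counter

def ngrams(text, n=3):
    """Character n-grams of a lowercased text, padded with spaces."""
    text = " " + text.lower() + " "
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def profile(text, n=3, top=300):
    """The language profile is the list of most frequent n-grams, by rank."""
    counts = Counter(ngrams(text, n))
    return [gram for gram, _ in counts.most_common(top)]

def distance(doc_profile, lang_profile):
    """Out-of-place distance: sum of rank differences between profiles."""
    max_penalty = len(lang_profile)
    total = 0
    for rank, gram in enumerate(doc_profile):
        if gram in lang_profile:
            total += abs(rank - lang_profile.index(gram))
        else:
            total += max_penalty
    return total

def identify(text, profiles):
    """Pick the language whose profile is closest to the document's."""
    doc = profile(text)
    return min(profiles, key=lambda lang: distance(doc, profiles[lang]))

# Toy training "corpora" -- a real system would train on megabytes of text.
profiles = {
    "en": profile("the quick brown fox jumps over the lazy dog and runs away"),
    "de": profile("der schnelle braune fuchs springt ueber den faulen hund"),
}
print(identify("the dog runs over the fox", profiles))
```

With corpora this tiny the result is only suggestive, but the mechanism scales: larger training texts and longer profiles make the rank-distance increasingly reliable.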
Sometimes it’s a pleasure to abandon that very-cool-enterprise development, pick up a book on algorithms, and solve a couple of problems from it. Just to keep the brain, if not sharp, then at least not rusty. Diving into a problem of that kind is also a nice way to resurrect the old math skills I was taught at university.
Though you can configure basic logging for Azure using the built-in diagnostic tools, sometimes it’s necessary to have more control over logging. So it might be a good idea to use log4net on top of the Azure diagnostics and configure it to fit all your needs.
I’m exploring Haskell, trying to learn it, or at least to get a better understanding of functional programming. So I thought it would be interesting to tease the brain and reimplement some parts of Haskell in my “mother tongue”, C#.
In the previous post I explained how to create a new Azure Machine Learning experiment and how to use linear regression to make predictions. The experiment is pretty cool in itself, but there is one minor problem with it: it’s totally useless. Oh, you certainly can open it up and run all the steps manually each time you need to predict profit based on city population (I’m still talking about the previous post, with its synthetic experiment and synthetic dataset), but you must agree that’s not very convenient. It would be nice to have a way to store your trained model, supply the data you need analyzed, and get the results back.
It’s been a really long pause between this post and the previous one, but I hope to pick up the pace and write more frequently.
This time we will review the new Azure Machine Learning service that was announced recently, and use it to solve that “boring” linear regression task from my previous posts.
It’s been a while since the last post was written, so it’s time to create a new one. I know, I promised to explain how to choose the $ \alpha $ parameter and why it matters, but not this time.
In the first article on linear regression I promised to show how to do it better, so this post will be about a truly scientific approach to the problem. Don’t worry if you don’t get it offhand. Honestly speaking, it took me some time to figure out what’s going on, and even now I occasionally take a piece of paper and draw the matrices and vectors to be sure I’m doing everything right.
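The matrix/vector formulation alluded to here is, I assume, the normal equation $\theta = (X^T X)^{-1} X^T y$, which solves least squares in closed form instead of iterating. As a hedged sketch (the function name and the one-feature simplification are mine), here it is written out in pure Python for a single feature plus an intercept, where $X^T X$ is just a 2×2 matrix we can invert by hand:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x via the normal equation
    theta = (X^T X)^{-1} X^T y, specialized to the 2x2 case."""
    n = len(xs)
    sx = sum(xs)                              # sum of x
    sxx = sum(x * x for x in xs)              # sum of x^2
    sy = sum(ys)                              # sum of y
    sxy = sum(x * y for x, y in zip(xs, ys))  # sum of x*y
    # X^T X = [[n, sx], [sx, sxx]]; invert the 2x2 matrix directly.
    det = n * sxx - sx * sx
    a = (sxx * sy - sx * sxy) / det  # intercept
    b = (n * sxy - sx * sy) / det    # slope
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # points lie exactly on y = 1 + 2x
print(a, b)  # -> 1.0 2.0
```

With many features you would keep the matrix form and use a linear algebra library rather than expanding the inverse by hand, but the 2×2 case makes it easy to check the derivation on paper.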