Artificial Intelligence: Possibilities and Practice


BY: Kevin Nicol, Applications Architect

“The second AI Winter is upon us!”

“Are you prepared for the Machine Learning takeover!?”

These were the top two headlines in the Artificial Intelligence (AI) section of my news feed a few weeks back.

There is an insane amount of optimism surrounding AI, paired with an almost equal amount of doom and gloom.

It’s no wonder the tech industry is struggling to come to terms with what the future should look like. But if you want to see how the progression will play out over time, take a trip back just one decade and look at the headlines surrounding cloud computing.

Everyone was just as confused then as they are now.

Some of the predicted cloud technologies are nowhere to be seen, but most have quietly crept into our lives without any real disruption.

I’ve just now hit save, and this document is being backed up to cloud-based file storage with seemingly unlimited capacity, over a worldwide network, at speeds fast enough that I don’t even think about them.

Seriously, when was the last time you were concerned about download speeds? The 10-year-old Kevin trying to fit a half dozen MP3s on a tiny hard drive after spending four hours downloading them on dial-up internet is beside himself.

What will 50-year-old me think of AI?

Marty McFly on his hoverboard in 1989’s Back to the Future Part II, depicting 2015. Like many predicted technologies, the hoverboards did not pan out.


This is more relevant than ever at Remsoft.

We are now fully embarked on our cloud implementation strategy, and the story of being a big-data entity on the cloud usually includes several chapters on AI/machine learning. Large amounts of data and large computing capacity are where Remsoft has been for decades, so what more can we do with AI?

I recently had the opportunity to attend Big-Data/AI Toronto where some of the largest players in the space were showcasing, and they’re doing some cool stuff. There were companies that trained AI/Machine learning models on thousands of horror movies to automatically create a movie trailer for a new horror movie being released. There were also companies doing more mundane things such as reading thousands of legal contracts to hunt down phrasing more likely to cause lawsuits.

The AI hype machine was in full operation and people were generally buying into what the speakers were selling. I must admit, the future certainly looks bright in this field, but it has been that way for some time.

The concept of a “Deep Learning Algorithm” is something many are talking about, but you don’t have to look too hard to find the newly converted within the crowd.

This isn’t because it’s a new thing.

It’s a new, catchy spin on a very old thing.

In The Age of Intelligent Machines, author and inventor Ray Kurzweil describes a deep learning algorithm as it’s known in academia: a neural net, a program that models itself on the human brain. I read this book in high school, and it wasn’t new then (it was published in 1990).
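To make the idea concrete, here is a minimal sketch of the concept behind a neural net: a single artificial “neuron” (a perceptron, from that same era of research) that nudges its weights toward correct answers, in this case learning the logical AND function. This is purely an illustration of the principle, not anything from Kurzweil’s book.

```python
def train_perceptron(samples, epochs=10, lr=1):
    """Learn weights and a bias from (inputs, target) pairs."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # The neuron "fires" (outputs 1) if the weighted sum
            # of its inputs crosses the threshold.
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            # Nudge the weights toward the correct answer.
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Teach it AND: only the input (1, 1) should produce 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_samples])
# → [0, 0, 0, 1]
```

A modern “deep” network is, loosely speaking, many layers of units like this one, trained on far more data with far more compute, which is exactly why the old barriers mattered so much.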

The algorithm Kurzweil made me obsess over was old even when he wrote about it, with much of the research dating back to the ’70s and some of it tracing as far back as 1948.

The issues of the time were not with the algorithm itself but with computational speed and the amount of data we could reasonably store, query, and get answers from in a reasonable time frame. Cloud computing and advances in big data have removed those barriers, and the industry is getting to work.

Still, all this work has yielded only a few isolated major achievements that companies can speak about on the big stage.

SmarterChild, a circa-2001 chatbot that teens mercilessly abused on MSN and AOL

Mostly, we just see awkward chatbots that try to help us as we browse for car insurance. But there is certainly broader use of machine learning today across a variety of verticals, from meteorology to financial market prediction.

These models are doing real, useful work daily, but they are also hidden from view, largely the domain of a few scientists inside a corporation running them to predict the best stocks to pick.

The chatbot may be the first truly public-facing AI construct, or certainly the first to garner general notice, but these chatbots are the equivalent of my aforementioned dial-up MP3s from the ’90s. Now that the barriers to AI research have been lifted, better AI and machine learning capabilities will integrate into all applications.

The well of possibilities with AI runs very deep, but if you’re waiting for us to launch the platform where a few aerial photos and a folder of mill contracts suddenly turn into an optimized harvest schedule, don’t hold your breath.

That’s the conclusion people often jump to when they think of AI, and it’s not unreasonable. Lidar data is changing our sector, giving us access to highly accurate data at large scales, and similar document processing engines have already made their way into law firms.

The technology, data, speeds, and algorithms just aren’t ready to pull all the pieces together yet. Slow change, for the right reasons, is usually better than erratic, fast change.

In the meantime, what we at Remsoft will be doing is taking advantage of advances in AI practices and methodologies, just as we have with big data and cloud technologies. They grow our capabilities and simply become part of the tech stack: small aspects of the software we don’t even think twice about anymore.

The hype is most certainly real, but it can also be misleading. Change will not come tomorrow, but it will come. There will be growing pains, small successes, and possibly large failures.

In the end, when the dust settles, you will likely not even notice the difference until you compare it with yesterday.