About us

We provide consultancy across the Artificial Intelligence field.

Founded in 2008, we have been providing consultancy and development to our partners for over 15 years. Our founder, Christopher Thomas, writes and reviews AI publications, a selection of which is listed below.

Latent Diffusion and Perceptual Latent Loss

This article presents a novel approach to training a generative model for image generation with reduced training time, working in latent space and using a pre-trained ImageNet latent classifier as a component of the loss function.

Super Resolution: Adobe Photoshop versus Leading Deep Neural Networks

How effective is Adobe’s Super Resolution compared to the leading super resolution deep neural network models? This article evaluates exactly that, and the results of Adobe’s Super Resolution are very impressive.

Super Convergence with Cyclical Learning Rates in TensorFlow

Super-Convergence using Cyclical Learning Rate schedules is one of the most useful techniques in deep learning, and one that is very often overlooked. It allows for rapid prototyping of network architectures, loss function engineering, and data augmentation experiments, and for training production-ready models in orders of magnitude fewer epochs and less training time.
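To give a flavour of the idea, here is a minimal sketch of a one-cycle learning rate schedule, the form of cyclical schedule popularised for super-convergence. The function and its parameter names (`max_lr`, `div_factor`, `pct_warmup`) are illustrative choices, not taken from the article.

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1e-2, div_factor=25.0, pct_warmup=0.3):
    """One-cycle schedule sketch: ramp linearly from max_lr/div_factor up to
    max_lr, then cosine-anneal back down to the starting rate."""
    warmup_steps = int(total_steps * pct_warmup)
    min_lr = max_lr / div_factor
    if step < warmup_steps:
        # Linear warm-up towards the peak learning rate.
        return min_lr + (max_lr - min_lr) * step / warmup_steps
    # Cosine annealing from the peak back down to the minimum.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + (max_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))
```

A callback that sets the optimiser's learning rate from `one_cycle_lr` at each batch is all that is needed to use such a schedule in a training loop.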

Deep learning image enhancement insights on loss function engineering

Insights on techniques and loss function engineering for Super Resolution, Colourisation, and style transfer.

Deep learning based super resolution, without using a GAN

This article describes the techniques used to train a deep learning model for image improvement, image restoration, inpainting, and super resolution.

U-Net deep learning colourisation of greyscale images

This article describes experiments training a neural network to generate three-channel colour images from single-channel greyscale images using deep learning. In my opinion the results, whilst they vary by subject matter, are astounding, with the model hallucinating the colours that should be present in the original subject matter.

Recurrent Neural Networks and Natural Language Processing

Recurrent Neural Networks (RNNs) are a class of machine learning algorithms that are ideal for sequential data such as text, time series, financial data, speech, audio, and video, among others.
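The recurrence that makes RNNs suit sequential data can be sketched in a few lines: each step's hidden state mixes the current input with the previous hidden state, so earlier elements of the sequence influence later ones. This is an illustrative vanilla (Elman) RNN with random, untrained weights, not code from the article.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: the new hidden state depends on both the
    current input and the previous hidden state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

def run_rnn(xs, hidden=4, seed=0):
    """Scan a sequence of input vectors through the recurrence; the final
    hidden state summarises the whole sequence."""
    rng = np.random.default_rng(seed)
    in_dim = xs.shape[1]
    W_xh = rng.normal(size=(in_dim, hidden)) * 0.1  # input-to-hidden weights
    W_hh = rng.normal(size=(hidden, hidden)) * 0.1  # hidden-to-hidden weights
    b_h = np.zeros(hidden)
    h = np.zeros(hidden)
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
    return h
```

In practice the weights are learned by backpropagation through time, and gated variants such as LSTMs and GRUs are used to handle long sequences.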

Loss functions based on feature activation and style loss

Loss functions using these techniques can be used during the training of U-Net based model architectures, and could be applied to the training of other Convolutional Neural Networks that generate an image as their prediction/output.
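As a minimal sketch of the two ingredients: a feature activation (perceptual) loss compares activations of the predicted and target images taken from a pre-trained network, while a style loss compares the Gram matrices of those activations. The function names, the L1/MSE choices, and the `style_weight` value below are illustrative assumptions, not the article's exact formulation.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) activation map, capturing
    which channels co-activate (texture/style) independent of position."""
    c, hw = features.shape
    return features @ features.T / (c * hw)

def feature_and_style_loss(pred_feats, target_feats, style_weight=1e4):
    """Combine an L1 feature activation loss with a mean-squared Gram-matrix
    style loss, summed over the chosen network layers."""
    loss = 0.0
    for p, t in zip(pred_feats, target_feats):
        loss += np.abs(p - t).mean()  # feature activation (perceptual) term
        loss += style_weight * ((gram_matrix(p) - gram_matrix(t)) ** 2).mean()
    return loss
```

In a real training loop, `pred_feats` and `target_feats` would come from intermediate layers of a frozen pre-trained classifier such as VGG.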

U-Nets with ResNet Encoders and cross connections

A U-Net architecture with cross connections similar to a DenseNet

Random forests — a free lunch that’s not cursed

Random forests belong to a group of machine learning techniques called ensembles of decision trees: several decision trees are each trained on a bootstrapped sample of the data (bagging), and their predictions are averaged.

An introduction to Convolutional Neural Networks

Describing what Convolutional Neural Networks are, how they function, how they can be used and why they are so powerful.

Tabular data analysis with deep neural nets

Deep neural networks are now an effective technique for tabular data analysis, requiring little feature engineering and less maintenance than other techniques.