Thanks to this emerging augmented intelligence, the comprehensive chronicling of patient care is poised to have a far-reaching impact on drug discovery and development.
May 6th 2020
Forbes



Embeddings for sentence fragments harvested from a document can serve as extractive summary facets of that document and potentially accelerate its discovery, particularly when user input is a…
April 24th 2020
Towards Data Science



I recalled recently the very first time I searched something on Google. I was an engineering student in a remote part of India with temperamental internet connectivity. Disgruntled with the…
July 2nd 2019
Medium



While Deep Learning (DL) models continued to set new records in 2019, with state-of-the-art performance in a wide variety of tasks, particularly in Natural Language Processing, 2019 marked the year where…
March 3rd 2020
Towards Data Science



BERT’s raw word embeddings capture useful and separable information (distinct histogram tails) about a word in terms of other words in BERT’s vocabulary. This information can be harvested from both…
July 4th 2020
Towards Data Science



The T5 model treats a wide variety of many-to-many and many-to-one NLP tasks in a unified manner by encoding the different tasks as text directives in the input stream. This enables a single model to…
November 7th 2019
Towards Data Science



Unsupervised learning of the probability distribution of word sequences in a language, by predicting each word within its sentence context in a large corpus, has proven to be useful to create models and…
July 12th 2019
Towards Data Science



Attention, the simple idea of focusing on salient parts of the input by taking a weighted average of them, has proven to be the key factor in a wide class of neural net models. Multihead attention in…
August 19th 2019
Towards Data Science
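The excerpt above defines attention as a weighted average over salient parts of the input. A minimal sketch of that idea, using toy 2-d vectors (the data and function names are assumptions for illustration, not the article's code):

```python
import math

# Sketch of the weighted-average idea behind attention: a query is
# scored against each key, the scores are softmax-normalized into
# weights, and the output is the weighted average of the value vectors.

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    # Dot-product score between the query and each key.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    # Output: weighted average of the value vectors.
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights

query = [1.0, 0.0]                       # "what am I looking for"
keys = [[1.0, 0.0], [0.0, 1.0]]          # "what each input offers"
values = [[10.0, 0.0], [0.0, 10.0]]      # "what each input contributes"
output, weights = attend(query, keys, values)
print(weights)  # the first input, which matches the query, gets more weight
```

Multihead attention runs several such weighted averages in parallel, each with its own learned projections of queries, keys, and values.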



Transformer architecture models, particularly BERT, have been demonstrated to be quite effective in many NLP tasks by just fine-tuning a model that was pretrained in an unsupervised manner on a large…
June 26th 2019
Towards Data Science



One of the roadblocks to entity recognition for any entity type other than person, location, organization, disease, gene, drug, and species is the absence of labeled training data. BERT offers a…
May 20th 2019
Medium



The most natural/intuitive way to represent words when they are input to a language model (or any NLP task model) is to just represent words as they are — as a single unit. For example, if we are…
March 26th 2019
Medium
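The excerpt above describes representing each word as a single unit. A minimal sketch of that whole-word vocabulary lookup (the corpus and names are made up for illustration); it also exposes the out-of-vocabulary problem such a representation runs into:

```python
# Sketch of representing words "as they are", one unit per word:
# each distinct word in the training text gets its own vocabulary id,
# and any word never seen in training can only map to a catch-all <unk>.

corpus = "the cat sat on the mat".split()
vocab = {"<unk>": 0}
for word in corpus:
    vocab.setdefault(word, len(vocab))

def encode(words):
    return [vocab.get(w, vocab["<unk>"]) for w in words]

print(encode("the cat sat".split()))   # every word found: one id per word
print(encode("the dog sat".split()))   # "dog" was never seen -> <unk> id 0
```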



If we have a six-faced die and no upfront information about it (that is, we don't know whether it is biased), then… On the other hand, if we have some upfront knowledge that the die has a bias (i.e. it has…
March 24th 2019
Medium



Word embeddings are essentially vector representations of words, typically learnt by an unsupervised model when fed with large amounts of text as input (e.g. Wikipedia, science, news articles…
June 29th 2019
Analytics Vidhya



Entropy of a probability distribution is the average "element of surprise", or amount of information, when drawing from (or sampling) the probability distribution. Let's consider a probability…
March 24th 2019
Medium
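The excerpt above defines entropy as the average "element of surprise". A minimal sketch of that computation, reusing the six-faced die from the companion article (the particular loaded distribution is an assumption for illustration):

```python
import math

# Sketch of entropy as average surprise: the surprise of an outcome
# with probability p is -log2(p), and entropy is the probability-
# weighted average of that surprise, measured in bits.

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

uniform_die = [1 / 6] * 6                     # no outcome is more surprising
loaded_die = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]   # one face is much less surprising

print(entropy(uniform_die))  # log2(6) ≈ 2.585 bits, the maximum for 6 outcomes
print(entropy(loaded_die))   # lower: the common face carries little surprise
```

The uniform die maximizes entropy because every draw is maximally surprising; any bias lowers the average surprise.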



If we are asked to look at the three animals below and say which one is more of a cat than a dog, most of us would agree that… If we want a neural net based model to do the same thing (we have gotten…
March 23rd 2019
Medium
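The excerpt above asks a model to judge whether an animal is "more of a cat than a dog". A minimal sketch of the usual final step in such a classifier, turning raw per-class scores into probabilities via softmax (the logit values are made up for illustration):

```python
import math

# Sketch of how a neural net expresses "more of a cat than a dog":
# the final layer emits one raw score (logit) per class, and softmax
# turns those scores into probabilities that sum to 1.

def softmax(logits):
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = {"cat": 2.0, "dog": 0.5}   # hypothetical raw scores from the net
probs = dict(zip(logits, softmax(list(logits.values()))))

print(probs)  # "cat" receives most of the probability mass
```

The gap between the two probabilities, rather than a hard yes/no answer, is what lets the model say one image is "more of a cat" than another.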



Many of the current state-of-the-art models for supervised NLP tasks are models pre-trained on language modeling (which is an unsupervised task), and then fine-tuned (supervised) with labeled data…
March 23rd 2019
Medium



BERT is the current state-of-the-art model for many NLP tasks. BERT's output, which is essentially context-sensitive word vectors, has been used for downstream tasks like classification and NER. This is…
March 23rd 2019
Medium
Contact Us

Cambridge (HQ)

One Main Street, Suite 400
East Arcade, 4th Floor
Cambridge, MA 02142

Bangalore

2nd Floor, Indiqube Golf View Homes,
3rd Cross Road, S R Layout,
Wind Tunnel Road,
Murugesh Palaya,
Bengaluru 560017

Toronto

111 Peter St
Toronto, ON M5V 2G9, Canada

Rochester

18 3rd Street S.W. Suite #201
Rochester, MN 55902

©2020 nference, Inc. All rights reserved.