Hi there 👋

I'm Stéphan Tulkens! I'm a computational linguistics/AI person, currently working as a machine learning engineer at Ecosia, and one of the two founding members of The Minish Lab.

I got my PhD at CLiPS at the University of Antwerp, under the watchful eyes of Walter Daelemans (Computational Linguistics) and Dominiek Sandra (Psycholinguistics). The topic of my PhD was how people process orthography during reading. You can find a copy here. Before that, I studied computational linguistics (MA), philosophy (BA), and software engineering (BA).

My goal is always to make things as fast and small as possible. I like it when simple models work well, and I love it when simple models get close in accuracy to big models. I do not believe absolute accuracy is a metric to be chased, and I think we should always be mindful of what a model computes or learns from the data.

I'm currently working on 🏃‍♂️:

  • model2vec: a library for creating extremely fast sentence-transformers through distillation (a minimal usage sketch follows this list).
  • reach: a library for loading and working with word embeddings.

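A minimal sketch of what using model2vec looks like, based on my understanding of its public `StaticModel` and `distill` entry points; the model names and PCA dimensionality below are illustrative choices, not recommendations:

```python
from model2vec import StaticModel
from model2vec.distill import distill

# Load a pre-distilled static embedding model (model name assumed from the
# Minish Lab hub namespace; substitute whichever model you actually use).
model = StaticModel.from_pretrained("minishlab/potion-base-8M")

# Encoding is a lookup-and-pool over static token vectors, so there is no
# transformer forward pass at inference time. That is where the speed comes from.
embeddings = model.encode([
    "Simple models can get surprisingly close to big ones.",
    "Static embeddings are small and fast.",
])
print(embeddings.shape)  # (2, embedding_dim)

# Or distill your own static model from a Sentence Transformer
# (base model and pca_dims are example values).
my_static_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)
```
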
Other stuff I made (most of it from my PhD) 🍕:

  • wordkit: a library for working with orthography.
  • old20: calculate the orthographic Levenshtein distance 20 (OLD20) metric (a small sketch of the computation follows this list).
  • metameric: fast interactive activation networks in numpy.
  • humumls: load the UMLS database into a MongoDB instance. Fast!
  • dutchembeddings: word embeddings for Dutch (back when this was a cool thing to do).

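OLD20 is simply the mean Levenshtein distance from a word to its 20 closest neighbours in a reference lexicon. The sketch below implements that definition from scratch rather than showing the old20 package's own API; the function names are my own:

```python
from heapq import nsmallest


def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]


def old20(word: str, lexicon: list[str], n: int = 20) -> float:
    """Mean edit distance from `word` to its n closest lexicon neighbours."""
    distances = (levenshtein(word, other) for other in lexicon if other != word)
    closest = nsmallest(n, distances)
    return sum(closest) / len(closest)
```

For a real lexicon you would want a faster neighbour search than brute force, but the definition itself really is this small.
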
My research interests 🤖:

  • Tokenizers, specifically subword tokenizers.
  • Embeddings, specifically static embeddings (so old-fashioned! 💀), and how to combine these in meaningful ways.
  • String similarity, and how to compute it without using dynamic programming (a DP-free sketch follows this list).

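That DP-free angle can be illustrated with character n-gram profiles: represent each string as a bag of character trigrams and compare the bags with cosine similarity, no alignment needed. This is a generic illustration of the idea, not a description of any particular library:

```python
from collections import Counter
from math import sqrt


def char_ngrams(s: str, n: int = 3) -> Counter:
    """Count character n-grams, with boundary markers so short strings still get features."""
    padded = f"#{s}#"
    return Counter(padded[i:i + n] for i in range(max(len(padded) - n + 1, 1)))


def ngram_similarity(a: str, b: str, n: int = 3) -> float:
    """Cosine similarity between character n-gram count vectors."""
    ca, cb = char_ngrams(a, n), char_ngrams(b, n)
    dot = sum(ca[g] * cb[g] for g in ca.keys() & cb.keys())
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0


print(ngram_similarity("colour", "color"))  # high: the trigram profiles mostly overlap
print(ngram_similarity("colour", "table"))  # low: almost no shared trigrams
```
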
Contact:

Pinned repositories:

  1. reach: Load embeddings and featurize your sentences. (Python)

  2. clips/dutchembeddings: Repository for the word embeddings experiments described in "Evaluating Unsupervised Dutch Word Embeddings as a Linguistic Resource", presented at LREC 2016. (Python)

  3. MinishLab/model2vec: The Fastest State-of-the-Art Static Embeddings in the World. (Python)