Building an End-to-End Deep Learning GitHub Discovery Feed

Balazs H.
Published July 9, 2018 · Updated November 8, 2019

There's hardly a developer who doesn’t use GitHub. With all those stars, pulls, pushes and merges, GitHub has a plethora of data available describing the developer universe.

As a Data Scientist at Stream, my job is to develop recommender systems for our clients so that they can provide a better user experience for their customers. With that said, I wanted to see if I could build a recommendation system for a product that I use daily (similar to the tool I built for Instagram), as well as try out some new deep learning architectures and “big data” processing tools that I’ve been wanting to play with.

I chose to use Dask (which provides advanced parallelism for analytics, enabling performance at scale for the tools you love) for my "Big Data" processing needs. I like to think of it as out-of-core, parallel NumPy and Pandas. What's not to love? For building the deep learning architectures, I decided to use PyTorch. PyTorch provides "Tensors and Dynamic neural networks in Python with strong GPU acceleration". Its easy-to-use interface and superior debugging capabilities make PyTorch amazingly pleasant to work with.

In my mind, there were three main parts to building this recommender system: 1) downloading and processing data, 2) building a recommender model, and 3) putting that system into a production environment.

First things first. Check out the demo!

Downloading and Processing Data

I know that data munging isn’t always fun or sexy. However, since it is such a large part of many data science and machine learning workflows, I wanted to go through how I handled processing over 600 Million events.

Downloading Data from the GitHub Archive

“In 2012, the community-led project GitHub Archive was launched, providing a glimpse into the ways people build software on GitHub. This 3TB+ dataset comprises the largest released source of GitHub activity to date. It contains activity data for more than 2.8 million open source GitHub repositories, including more than 145 million unique commits, over 2 billion different file paths, and the contents of the latest revision for 163 million files” [1].

That’s a good chunk of data to play with. This dataset includes over 20 different event types, everything from commits to comments to stars for public event data. However, it does not contain private repo information (thankfully), nor any "view" or "clone" data. While this is awesome for privacy, it does provide a bit of a handicap for recommendations, as viewing and cloning repos would be excellent indicators of interest. That said, there should still be plenty of interaction data to provide some cool insights. I ended up using only data created after 1/1/2017, to get at least a year of interaction data and because it sounded like a nice number.

So… why not use only stars? Stars do seem like a great indicator of what a user is interested in; however, within both academia and industry, the use of explicit data has fallen out of favor, and the gold standard is now implicit data (any engagement event that could signal a user’s interest). To keep it simple, we’re going to use all of the data that GitHub gives us and treat it as implicit feedback. That means all 20+ event types (including stars) are treated the same as an implicit event. This ends up being about half a billion analytics events per year. Not too shabby. My computer, unfortunately, can’t store all that data in memory, which is where Dask comes in.

GitHub Archive updates once per hour and allows the end user to download .gz files of JSON data for each hour.  After a bit of consideration, I ended up storing the data as parquet files, as it seemed like a natural fit, plus it’s the suggested file format for storing Dask Dataframes. It’s also almost as fast as HDF5 for reading into memory (just as fast with multiple cores) and compresses WAY better on disk.

The functions I used to iteratively download data can be seen below. For ongoing updates, I simply wrapped the “update_data” function into a simple cronjob that runs once a day before my model is re-trained.

import pandas as pd
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq
import requests
import datetime
import json
import os
import gzip
from joblib import Parallel, delayed


def no_unicode(df):
    # can't store python object in parquet files
    types = df.apply(lambda x: pd.api.types.infer_dtype(x.values))
    if len(types) > 0:
        # python 2 check 
        for col in types[types == 'unicode'].index:
            df[col] = df[col].astype(str)
        for col in types[types == 'mixed'].index:
            df[col] = df[col].astype(str)
    return df

def get_hours(last_date):
    """ Returns number of hours (number of files to download)."""
    diff = datetime.datetime.now() - last_date
    days, seconds = diff.days, diff.seconds
    hours = days * 24 + seconds // 3600
    return hours

def get_data(i, last_date=datetime.datetime(2017,1,1, 1)):
    "Update parquet directory with most recent github data."
    date = last_date + datetime.timedelta(hours=i)
    datestring = f'{date.year}-{date.month:02}-{date.day:02}-{date.hour}'
    url = f'http://data.githubarchive.org/{datestring}.json.gz'
    r = requests.get(url)
    # write request to disk
    filename = f'{datestring}.json.gz'
    with open(filename, 'wb') as f:
        f.write(r.content)
    # parse compressed file into json    
    lines = []
    for line in gzip.open(filename, 'rb'):
        lines.append(json.loads(line))
    # store as parquet dataframe
    df = pd.DataFrame(lines)[['id', 'actor', 'created_at', 'repo', 'type']]
    df = no_unicode(df)
    df = df.set_index('id')
    df.to_parquet('parquet/%s.parquet' % filename.split('.json')[0])
    # cleanup
    os.remove(filename)
    
def update_data():
    "Download all the things."
    dates  = [file.split('.')[0] for file in os.listdir('parquet')]
    dates = [datetime.datetime.strptime(date, '%Y-%m-%d-%H') for date in dates]
    last_date = max(dates)
    # waiting sucks, let's try and speed some stuff up
    Parallel(n_jobs=10)(delayed(get_data)(i, last_date) for i in range(get_hours(last_date)))
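
For the very first run there is no parquet directory to infer a last date from, so bootstrapping looks something like the snippet below (a small sketch; the path and worker count are arbitrary choices, not from the original setup):

# bootstrap: download everything from 2017-01-01 onwards in parallel,
# then let the daily cronjob call update_data() from there on out
os.makedirs('parquet', exist_ok=True)
start = datetime.datetime(2017, 1, 1, 1)
Parallel(n_jobs=10)(delayed(get_data)(i, start) for i in range(get_hours(start)))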

Processing Data

Alright, now that we have lots of data to play with, we need to do some serious preprocessing before we can dump it into any sort of model.

The end goal is to have an array of data with each interaction in normalized integer form.

The workstation I was using has 32 cores and 64 GB of memory; however, the steps below should all be doable on a standard laptop. I did run them on my MacBook Pro with 16 GB of memory and 8 cores; some steps just took a lot longer due to the parallelization that Dask takes advantage of across multiple cores. One extremely nice thing about Dask is the monitoring it provides for your tasks: the distributed scheduler serves a live dashboard on the diagnostics port (8787 below), which is what helped me debug the following steps.

The steps were as follows:

Set up a local compute cluster for Dask, and define a computation graph to strip out user and repo names from JSON strings within a Dask Dataframe.

from distributed import Client, LocalCluster
import dask.dataframe as dd
import numpy as np
import ast

cluster = LocalCluster(ip='0.0.0.0', n_workers=32, threads_per_worker=1, diagnostics_port=8787, **{'memory_limit': 2e9})
client = Client(cluster)
print(client)

df = dd.read_parquet('parquet/')
print(f'found {len(df)} interactions')

df['user_id'] = df['actor'].apply(lambda x: ast.literal_eval(x).get('login', 'unknown'), meta=('x', 'U'))
df['repo_id'] = df['repo'].apply(lambda x: ast.literal_eval(x).get('name', 'unknown'), meta=('x', 'U'))

Turn the Dask DataFrame into a Dask array to take advantage of its slicing capabilities, and store it to disk as a NumPy stack to freeze the current state of the computation.

import dask.array as da
from dask import compute

def to_dask_array(df):
    # https://stackoverflow.com/questions/37444943/dask-array-from-dataframe
    partitions = df.to_delayed()
    shapes = [part.values.shape for part in partitions]
    dtypes = partitions[0].dtypes

    results = compute(dtypes, *shapes)  # trigger computation to find shape
    dtypes, shapes = results[0], results[1:]

    chunks = [da.from_delayed(part.values, shape, dtypes) 
              for part, shape in zip(partitions, shapes)]
    return da.concatenate(chunks, axis=0)

interactions = to_dask_array(df[['user_id', 'repo_id', 'created_at']])
da.to_npy_stack('interactions', interactions)

Iterate through those stacks to find all unique repos and users, and create user-to-id and item-to-id dictionaries. I was having some memory trouble on the final aggregation step when using Dask to do the unique count entirely out of core, so I ended up just iterating through the data in chunks, then mapping users and items and storing to disk one more time.


import math
import pickle
from tqdm import tqdm

interactions = da.from_npy_stack('interactions')
users = interactions[:,0]
items = interactions[:,1]
slicer = 10000000

for i in tqdm(range(math.ceil((len(interactions))/slicer))):
    if i == 0: 
        user_set = set(users[i*slicer: (i+1)*slicer].compute())
    else:
        user_set = user_set.union(set(users[i*slicer: (i+1)*slicer].compute()))
for i in tqdm(range(math.ceil((len(interactions))/slicer))):
    if i == 0: 
        item_set = set(items[i*slicer: (i+1)*slicer].compute())
    else:
        item_set = item_set.union(set(items[i*slicer: (i+1)*slicer].compute()))
          
user_id_map = {v:i for i,v in enumerate(user_set)}
item_id_map = {v:i for i,v in enumerate(item_set)}

with open('user_id_map.pkl', 'wb') as f:
     pickle.dump(user_id_map, f)
with open('item_id_map.pkl', 'wb') as f:
     pickle.dump(item_id_map, f)
             
def get_user(user):
    return np.array([user_id_map[x] for x in user])
def get_item(item):
    return np.array([item_id_map[x] for x in item])

interactions = da.from_npy_stack('interactions', mmap_mode=None)
users = interactions[:,0]
items = interactions[:,1]

for i in tqdm(range(math.ceil((len(interactions))/slicer))):
    if i != 0: 
        user_mapped = da.concatenate([user_mapped,
            get_user(users[i*slicer: (i+1)*slicer].compute())])
    else:
        user_mapped = get_user(users[i*slicer: (i+1)*slicer].compute())

for i in tqdm(range(math.ceil((len(interactions))/slicer))):
    if i != 0: 
        item_mapped = da.concatenate([item_mapped,
            get_item(items[i*slicer: (i+1)*slicer].compute())])
    else:
        item_mapped = get_item(items[i*slicer: (i+1)*slicer].compute())
        
da.to_npy_stack('users', user_mapped)
print('saving items')
da.to_npy_stack('items', item_mapped)

To reduce noise, repos with low engagement were removed from the training set (any repo with less than 50 associated interactions). I then mapped each user and item to a normalized index to get the format that we were striving for above.

print('users')
users = da.from_npy_stack('users', mmap_mode=None).compute().astype(np.int32)

print('items')
items = da.from_npy_stack('items', mmap_mode=None).compute().astype(np.int32)

print('getting unique')
unique_items,  item_inverse, item_count = np.unique(items, return_counts=True, return_inverse=True)
print('creating mask')
good_items = unique_items[np.where(item_count > 50)[0]]
mask = np.isin(items, good_items)
users = users[mask]
items = items[mask]
item_count = item_count[np.where(item_count>50)[0]]

# Normalize users and items to start at id:0
user_id_map_norm = {v:i for i,v in enumerate(set(users))}
item_id_map_norm = {v:i for i,v in enumerate(set(items))}

users = np.array([user_id_map_norm[x] for x in users])
items = np.array([item_id_map_norm[x] for x in items])
users = users.astype(np.int32)
items = items.astype(np.int32)
print(f'we now have {len(items)} interactions')

Whew, that was a lot of data munging. I know it’s not glamorous, but I end up spending a lot of my time doing this sort of work, so it seemed worth covering the plumbing instead of just the cool, shiny stuff. Now, on to the fun stuff!

Building a Recommender System Model

Collaborative filtering and matrix factorization approaches have been king in the recommender system space ever since the Netflix challenge. Today, sequence-based models have started to become increasingly prevalent. Fortunately, deep learning techniques can be applied to both.

Collaborative Filtering Using Neural Matrix Factorization

Neural Matrix Factorization is an approach to collaborative filtering, introduced in 2017, that tries to take advantage of some of the non-linearities that neural networks provide while keeping the generalization that matrix factorization provides. It does this by concatenating the feature vector produced by a multilayer perceptron with the element-wise product of the user and item feature vectors.

A simple illustration can be seen below:

PyTorch provided the building blocks for this network, and many ideas were taken from here:

https://github.com/maciejkula/spotlight and here: https://github.com/LaceyChen17/neural-collaborative-filtering

Making our network look like so:

import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

def gpu(tensor, gpu=False):
    if gpu:
        return tensor.cuda()
    else:
        return tensor


class ScaledEmbedding(nn.Embedding):
    """
    Embedding layer that initialises its values
    using a normal distribution scaled by the inverse
    of the embedding dimension.

    Resources
    ------------
    https://github.com/maciejkula/spotlight/blob/master/spotlight/layers.py
    """

    def reset_parameters(self):
        """
        Initialize parameters.
        """

        self.weight.data.normal_(0, 1.0 / self.embedding_dim)
        if self.padding_idx is not None:
            self.weight.data[self.padding_idx].fill_(0)


class ZeroEmbedding(nn.Embedding):
    """
    Embedding layer that initialises its values
    to zero.

    Used for biases.

    Resources
    ------------
    https://github.com/maciejkula/spotlight/blob/master/spotlight/layers.py
    """

    def reset_parameters(self):
        """
        Initialize parameters.
        """

        self.weight.data.zero_()
        if self.padding_idx is not None:
            self.weight.data[self.padding_idx].fill_(0)


class NeuralMatrixFactorization(nn.Module):
    """
    Neural Matrix Factorization Representation

    Parameters
    ----------

    num_users: int
        Number of users in the model.
    num_items: int
        Number of items in the model.
    embedding_dim: int, optional
        Dimensionality of the latent representations.
    sparse: boolean, optional
        Use sparse gradients.
    use_cuda_embeddings: boolean, optional
        Place the embedding tables on the GPU beforehand (can be faster, but be careful with memory!)
    activation_type: str, optional
        Activation function used between the MLP layers ('relu' is what this model uses).


    Resources
    ----------
    He, Xiangnan, et al. "Neural collaborative filtering." Proceedings of the 26th International Conference on World Wide Web.
    International World Wide Web Conferences Steering Committee, 2017.
    """

    def __init__(self, num_users, num_items, embedding_dim=32, sparse=False,
                 activation_type='relu', use_cuda_embeddings=False):

        super(NeuralMatrixFactorization, self).__init__()

        self.embedding_dim = embedding_dim
        self.use_cuda_embeddings = use_cuda_embeddings
        self.activation_type = activation_type

        self.user_embeddings_mlp = gpu(ScaledEmbedding(num_users, embedding_dim,
                                                       sparse=sparse), self.use_cuda_embeddings)
        self.user_embeddings_mf = gpu(ScaledEmbedding(num_users, embedding_dim,
                                                      sparse=sparse), self.use_cuda_embeddings)

        self.item_embeddings_mlp = gpu(ScaledEmbedding(num_items, embedding_dim,
                                                       sparse=sparse), self.use_cuda_embeddings)
        self.item_embeddings_mf = gpu(ScaledEmbedding(num_items, embedding_dim,
                                                      sparse=sparse), self.use_cuda_embeddings)

        self.item_biases_mf = gpu(ZeroEmbedding(num_items, 1, sparse=sparse), self.use_cuda_embeddings)
        self.item_biases_mlp = gpu(ZeroEmbedding(num_items, 1, sparse=sparse), self.use_cuda_embeddings)

        self.input_size = embedding_dim * 2
        self.output_size = 1

        self.fc_layers = torch.nn.ModuleList()
        layers = [self.embedding_dim * 2, self.embedding_dim,
                  self.embedding_dim / 2, self.embedding_dim / 4]
        layers = [int(layer) for layer in layers]
        for idx, (in_size, out_size) in enumerate(zip(layers[:-1], layers[1:])):
            self.fc_layers.append(torch.nn.Linear(in_size, out_size))
        self.fc_layers = self.fc_layers.to(device)
        self.output = torch.nn.Linear(int(self.embedding_dim / 4) + self.embedding_dim, out_features=1).to(device)

    def forward(self, user_ids, item_ids):
        """
        Compute the forward pass of the representation.

        """

        user_embedding_mlp = self.user_embeddings_mlp(user_ids).to(device).squeeze()
        item_embedding_mlp = self.item_embeddings_mlp(item_ids).to(device).squeeze()
        user_embedding_mlp = F.dropout(user_embedding_mlp, 0.5, training=self.training)
        item_embedding_mlp = F.dropout(item_embedding_mlp, 0.5, training=self.training)
        item_bias_mlp = self.item_biases_mlp(item_ids).to(device).squeeze()

        user_embedding_mf = self.user_embeddings_mf(user_ids).to(device).squeeze()
        item_embedding_mf = self.item_embeddings_mf(item_ids).to(device).squeeze()
        user_embedding_mf = F.dropout(user_embedding_mf, 0.5, training=self.training)
        item_embedding_mf = F.dropout(item_embedding_mf, 0.5, training=self.training)
        item_bias_mf = self.item_biases_mf(item_ids).to(device).squeeze()

        # Vanilla Matrix Factorization
        vector_mf = torch.mul(user_embedding_mf, item_embedding_mf)
        vector_mf = vector_mf + item_bias_mf.unsqueeze(1)
        # Multi Layer Perceptron
        vector_mlp = torch.cat((user_embedding_mlp, item_embedding_mlp + item_bias_mlp.unsqueeze(1)), 1)
        for idx, _ in enumerate(range(len(self.fc_layers))):
            vector_mlp = self.fc_layers[idx](vector_mlp)
            vector_mlp = torch.nn.ReLU()(vector_mlp)
            vector_mlp = F.dropout(vector_mlp, 0.5, training=self.training)

        vector = torch.cat((F.dropout(vector_mf, 0.5, training=self.training),
                            F.dropout(vector_mlp, 0.5, training=self.training)), 1)
        rating = self.output(vector)
        return rating
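
To make the wiring concrete, here is a minimal usage sketch of the class above; the user/item counts and batch below are placeholders rather than the real dataset's values:

# instantiate with placeholder sizes and score a batch of (user, item) pairs
num_users, num_items = 10000, 5000
model = NeuralMatrixFactorization(num_users, num_items, embedding_dim=32)

user_ids = torch.randint(0, num_users, (256, 1))   # batch of 256 interactions
item_ids = torch.randint(0, num_items, (256, 1))
ratings = model(user_ids, item_ids)                # -> (256, 1) predicted scores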

Sequence-Based Models

Sequence-based models have been extremely popular in recommender systems lately. The main idea behind them is that instead of modeling a user as a unique identifier, users are modeled as their past x interactions. This provides a couple of very nice properties. New interactions on a user don’t need to trigger a new model rebuild to generate up-to-date recommendations, as it is all based on the past x item interactions. Additionally, they can generalize immediately to new users once they start clicking around. In situations like personalizing recommendations for e-commerce, where all the data you have is based on a single session, this is essential. Many of these ideas have been adapted from natural language processing where language models are used to predict the next character or word in a sentence.

The mixture-of-tastes model tries to represent the diverse interests a user may have by ranking the user's interest in an item with several separate taste vectors. It does this by representing the taste vectors as different feature maps, using a CNN layer with a stride of one and an output depth equal to the number of taste vectors. For a nice breakdown of CNN architectures, I highly recommend reading Convolutional Neural Networks for Visual Recognition.

This idea seemed like it would work well here, as developers tend to have multiple interests. For instance, I’m mostly interested in machine learning related repos, however, I’m also interested in big data processing tools and backend web development, and it would be really nice if the model could somehow treat these as different subpopulations.

I highly recommend taking a look at the paper Maciej Kula wrote, as well as his implementation of the model. The only tweaks I made for my own model were to add some dropout layers and some ReLU activation functions between layers. This is the model that ended up being put into production for this project.  
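
Since the production model follows Maciej Kula's implementation, I'll only sketch the core idea here. The snippet below is a simplified, illustrative version rather than the model that was deployed: item embeddings from a user's interaction sequence are projected into several "taste" vectors plus matching attention vectors via 1x1 (stride-one) convolutions, and a candidate item is scored by an attention-weighted sum over those tastes. The class name, the mean-pooling over the sequence, and the layer sizes are my own simplifications.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfTastesSketch(nn.Module):
    def __init__(self, num_items, embedding_dim=32, num_tastes=4):
        super().__init__()
        self.embedding_dim = embedding_dim
        self.num_tastes = num_tastes
        self.item_embeddings = nn.Embedding(num_items, embedding_dim)
        # 1x1 convolutions over the sequence produce one feature map per taste
        # (and one per attention head), as described above
        self.taste_conv = nn.Conv1d(embedding_dim, num_tastes * embedding_dim, kernel_size=1)
        self.attention_conv = nn.Conv1d(embedding_dim, num_tastes * embedding_dim, kernel_size=1)

    def forward(self, item_sequences, target_items):
        # item_sequences: (batch, seq_len) of past item ids
        # target_items:   (batch,) of candidate item ids to score
        seq_emb = self.item_embeddings(item_sequences).permute(0, 2, 1)  # (batch, dim, seq)
        # mean-pooling the sequence is a simplification of the original model
        tastes = self.taste_conv(seq_emb).mean(dim=2)
        attention = self.attention_conv(seq_emb).mean(dim=2)
        tastes = tastes.view(-1, self.num_tastes, self.embedding_dim)
        attention = attention.view(-1, self.num_tastes, self.embedding_dim)

        target_emb = self.item_embeddings(target_items).unsqueeze(2)     # (batch, dim, 1)
        taste_scores = torch.bmm(tastes, target_emb).squeeze(2)          # (batch, tastes)
        attention_weights = F.softmax(torch.bmm(attention, target_emb).squeeze(2), dim=1)
        return (attention_weights * taste_scores).sum(dim=1)             # (batch,)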

Training

Due to the number of items updating, each embedding during every backward pass can become computationally expensive, if all negative samples are taken into account. Thankfully, we can take some tricks from natural language processing and take advantage of some of those negative sampling techniques. Since any data point that isn’t part of a user’s interaction history is considered to be implicitly negative, and the sample size (user’s interaction history) is much smaller than the number of items in our population (all the repos), chances are, a randomly sampled item will be implicitly negative. To help our network sample things more efficiently, we can also take the sampling distribution idea from word2vec where the probability of selecting something is:
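
P(i) = count(i)^(3/4) / sum_j count(j)^(3/4)

(This is the standard word2vec unigram-to-the-3/4-power distribution, where count(i) is the number of interactions item i has.) A quick sketch of how it could be computed, assuming item_count holds per-item interaction counts aligned with the normalized item ids from the preprocessing step:

import numpy as np

# word2vec-style negative sampling distribution: popular items are sampled
# more often, but the 3/4 power damps the very head of the distribution
sampling_weights = item_count ** 0.75
sampling_probs = sampling_weights / sampling_weights.sum()

# draw a batch of implicit negatives
negative_items = np.random.choice(len(sampling_probs), size=1024, p=sampling_probs)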

As this is a learning-to-rank problem using implicit data points, I ended up using Bayesian Personalized Ranking (BPR) loss, a variant of pairwise loss, as my loss function. In PyTorch this ends up looking like:

def bpr_loss(positive_predictions, negative_predictions):
    """
    Bayesian Personalised Ranking pairwise loss function. Original Implementation: https://github.com/maciejkula/spotlight

    """

    loss = (1.0 - torch.sigmoid(positive_predictions -
                                negative_predictions))

    return loss.mean()

Where positive predictions are taken from the forward pass through our network with the given batch of training data, and negative predictions are randomly sampled values from our interactions.
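
Putting the pieces together, a minimal training step might look like the sketch below. This is illustrative only: the batching, the learning rate, and the use of the NeuMF model from above (rather than the production sequence model) are all assumptions.

import numpy as np
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(user_batch, item_batch):
    user_ids = torch.from_numpy(user_batch).long().unsqueeze(1)
    pos_ids = torch.from_numpy(item_batch).long().unsqueeze(1)
    # implicit negatives drawn from the word2vec-style distribution above
    neg_ids = torch.from_numpy(
        np.random.choice(len(sampling_probs), size=len(item_batch), p=sampling_probs)
    ).long().unsqueeze(1)

    positive_predictions = model(user_ids, pos_ids)
    negative_predictions = model(user_ids, neg_ids)
    loss = bpr_loss(positive_predictions, negative_predictions)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()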

Following standard best practices, I split my data by user into 80% for training, 10% for validation, and 10% for testing.

After 15 epochs, the model achieved a training loss of 0.0085 and a validation loss of 0.0213.

At first I thought the model had severely overfit, but after the first couple of steps the validation and training losses were decreasing at roughly the same rate, so I chalked the gap up to generalization error (plus the usual noise of estimating it from a finite sample).

Using Mean Reciprocal Rank (MRR) as the evaluation metric, the test set was set up to use all but the last interaction for any given user, and all items were ranked to see whether the model could rate that next item the highest. On my test set (as of writing this blog post), I achieved an MRR of 0.058, which means that, out of scores predicted for about 1.4 million items, the last item a user interacted with was, on average, among the model's top ~17 ranked items.
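
For reference, the reciprocal rank for a single test user can be computed like this (a small sketch; scores holds the model's score for every item and held_out is the index of the user's held-out last interaction):

import numpy as np

def reciprocal_rank(scores, held_out):
    # rank 1 = highest-scored item
    rank = int((scores > scores[held_out]).sum()) + 1
    return 1.0 / rank

# MRR is simply the mean of this value over all test users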

Serving

Typically in production environments, it is too expensive and slow to rank every item within a strict response cycle of tens of milliseconds. The most common way around this is to break the response into two steps: (1) candidate generation, and then (2) ranking using every feature available.

Candidate Generation

It’s impossible to rank millions of repos within a strict response cycle (at least on a CPU). One standard way of dealing with this problem is to provide a subset of candidates (in the range of hundreds) that may be relevant, and then rank those candidates, using our model above.

I took an approach similar to YouTube, where an approximate nearest neighbors approach is used on top of a Neural Net to find candidates based on the average of their last interactions. However, instead of building another Neural Network, I figured I could just use the computed item feature vectors that were calculated from the mixture model above for a given user.

I was slightly concerned that computing the cosine similarity between different repos could produce poor results, since the embeddings don’t exactly have a linear relationship with one another. However, based on some empirical evidence, nearest neighbors seemed to generate good candidates.

Nearest neighbors to https://github.com/pytorch/pytorch

Nearest neighbors to https://github.com/facebook/react

Using Spotify’s Annoy library to calculate approximate nearest neighbors, generating 1,000 candidates and ranking those instead of ranking over 1.4 million repos cut my response time from seconds to tens of milliseconds.

Building the index is rather simple:

from annoy import AnnoyIndex

f = 32  # dimensionality of the item embeddings
t = AnnoyIndex(f, 'angular')  # angular distance ~ cosine similarity
for i in range(len(item_embeddings)):
    t.add_item(i, item_embeddings[i])

t.build(10) # 10 trees
t.save('github.ann')

Now, I just need a vector to query off of. By taking the average of the past 90 days worth of interactions, it is possible to quickly generate 1,000 reasonable candidates to rank.

from github import Github
from requests_futures.sessions import FuturesSession
import itertools
import numpy as np

user_id = 'BalazsHoranyi'  # hardcoded for demo purposes

def get_github_events(user_name):
    github_token = "superdupersecret"
    urls = [f'https://api.github.com/users/{user_name}/events?page={i}&access_token={github_token}' for i in range(11)]
    headers = {}
    headers['Authorization'] = f'token {github_token}'
    with FuturesSession(max_workers=4) as session:
        futures = [session.get(u, headers=headers) for u in urls]
    rs = [f.result() for f in futures]
    if rs[0].status_code == 404:
        return None
    rs = [r.json() for r in rs]
    rs = [item for sublist in rs for item in sublist]
    repo_names = [repo['repo']['name'] for repo in rs]
    return repo_names

repos = get_github_events(user_id)
seq_to_predict = []
repo_names = []
total_item_embeddings = np.zeros(32)
count = 0
for repo_name in repos:
    item_id = name_id_norm.get(repo_name, -1) + 1
    item_embedding = item_embedding_map.get(item_id, np.zeros(32))
    if item_id > 0:
        count += 1
        seq_to_predict.append(item_id)
        repo_names.append(repo_name)
        total_item_embeddings += item_embedding

avg_item_embeddings = total_item_embeddings / count

seq_to_predict = seq_to_predict[::-1]  # oldest first
repo_names = repo_names[::-1]          # what the user has already interacted with
print(f'user interacted with {repo_names}')

# Generate Candidates
candidate_ind = np.array(t.get_nns_by_vector(avg_item_embeddings, 1000))

Now that I have candidates, I can generate predictions for each one by passing them through my network, making the final response cycle look like:

  1. (90+% of the time is spent here) Query GitHub to get the user's past 90 days of interactions
  2. Get average embedding for those interactions
  3. Generate 1000 candidates using approximate nearest neighbors based on average embedding
  4. Rank those 1000 candidates using the mixture model
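
The ranking step (4), sketched roughly: assuming a sequence model whose forward pass scores batches of (interaction sequence, candidate item) pairs, as in the simplified mixture sketch earlier, ranking the ANN candidates could look like this.

import torch

with torch.no_grad():
    seq = torch.LongTensor(seq_to_predict).unsqueeze(0)            # (1, seq_len)
    candidates = torch.from_numpy(candidate_ind).long()            # (n,)
    scores = model(seq.expand(len(candidates), -1), candidates)    # (n,) predicted scores

# highest scores first; map ids back to repo names with a reverse lookup
top_candidates = candidates[torch.argsort(scores, descending=True)][:10]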

Putting PyTorch in Production

I wanted the model to run outside of a strict file structure and on the CPU (more so for economic reasons), so I serialized the state dictionary of the model instead of the whole thing.

I just needed to make sure to call model.eval() to get out of training mode and into evaluation mode. This is important, as it takes care of things such as ignoring dropout for inference.
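
The save/load pattern is the standard PyTorch one; here is a sketch, with the file name and constructor arguments as placeholders (the NeuMF class above stands in for the production sequence model):

# training side: persist only the learned parameters
torch.save(model.state_dict(), 'recommender_state.pt')

# serving side: rebuild the module, load the weights onto the CPU, switch to eval
model = NeuralMatrixFactorization(num_users, num_items, embedding_dim=32)
model.load_state_dict(torch.load('recommender_state.pt', map_location='cpu'))
model.eval()  # disables dropout for inference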

I am very excited about the recent announcement of PyTorch 1.0, which has a very large emphasis on production environments. However, until its official release, I’m keeping production a little simpler at the cost of some efficiency. So, I kept everything in Python and ran it off of Django, using the Django REST Framework to handle API responses.

Now, when a user sends a request for recommendations, we get their last 90 days of interactions through the GitHub API, map them to our normalized IDs, and run them through our neural network. The response is a ranked list of what the model thinks you may want to interact with next. As long as there's at least one interaction with a known repo, it can give recommendations; if no public information is available, it defaults to GitHub's own discovery page.

I’ve mentioned this before, but sequence-based models can stay relevant up to your last interaction without retraining the whole model, which means you could go star/commit/open issues (don’t tell the OSS people I said that) and see how your recommendations change in real time.

TL;DR

  • Download 0.6+ billion events from GH Archive.
  • Use Dask to process all events on a single machine.
  • Build a sequence-based neural network model to predict user interactions.
  • Use embeddings from the model and approximate nearest neighbors to generate candidates.
  • Serve ranked list of repos and help you find your new favorite one!

Of course (and thanks for reading this far!), if you have anything that you would like me to try or write about, make sure to comment!

[1] https://blog.github.com/2016-06-29-making-open-source-data-more-available/