This is a repo of the Natural Language to Machine Learning (NL2ML) project of the Laboratory of Methods for Big Data Analysis at Higher School of Economics (HSE LAMBDA).
The project's official repo is hosted on GitLab (HSE LAMBDA repository): https://gitlab.com/lambda-hse/nl2ml
The project's full description is stored on Notion: https://www.notion.so/NL2ML-Corpus-1ed964c08eb049b383c73b9728c3a231
The project's experiments are stored on DAGsHub: https://dagshub.com/levin/source_code_classification
To build a model that classifies a source code chunk and localizes the detected class within the chunk (tag segmentation).
To build a model that generates code from a short natural-language task description given as input.
This repository contains the instruments which the project's team has been using to label source code chunks with Knowledge Graph vertices and to train models to recognize these vertices in the future. By Knowledge Graph vertices we mean elementary parts of an ML pipeline. The current version of the Knowledge Graph contains the following high-level vertices: ['import', 'data_import', 'data_export', 'preprocessing', 'visualization', 'model', 'deep_learning_model', 'train', 'predict'].
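The vertex list above can be kept as a small Python constant with a membership check; a minimal sketch (the names `KG_VERTICES` and `is_known_vertex` are illustrative, not taken from the project code):

```python
# Hypothetical sketch: the Knowledge Graph's high-level vertices as a
# Python constant, with a helper that checks a predicted tag is known.
KG_VERTICES = [
    "import", "data_import", "data_export", "preprocessing",
    "visualization", "model", "deep_learning_model", "train", "predict",
]

def is_known_vertex(tag: str) -> bool:
    """Return True if `tag` is one of the high-level Knowledge Graph vertices."""
    return tag in KG_VERTICES

print(is_known_vertex("preprocessing"))  # True
print(is_known_vertex("unknown_tag"))    # False
```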
To download the project data and models:

`dvc pull data`

Note: if `dvc pull [folder_to_pull]` fails, try `dvc pull [folder_to_pull] --jobs 1`.
The instruments we have been using to reach the project goals include: parsing notebooks from the Kaggle API and GitHub API, data preparation, regex labelling, model training, model validation, analysis of model weights/coefficients, error analysis, and synonym analysis.
nl2ml_notebook_parser.py - a script for parsing Kaggle notebooks and processing them into JSON/CSV/Pandas.
bert_distances.ipynb - a notebook with BERT experiments exploring the distances between BERT embeddings, where the input tokens were tokenized source code chunks.
bert_classifier.ipynb - a notebook with preprocessing and training of the BERT pipeline.
regex.ipynb - a notebook that creates labels for code chunks with regular expressions.
logreg_classifier.ipynb - a notebook that trains a logistic regression model on the regex labels with TF-IDF features and analyzes the outputs.
Comments vs commented code.ipynb - a notebook with a model distinguishing natural-language comments from commented-out source code.
github_dataset.ipynb - a notebook for opening the github_dataset.
predict_tag.ipynb - a notebook for predicting a class label (tag) with any model.
svm_classifier.ipynb - a notebook for training an SVM (replaced by svm_train.py) and analyzing SVM outputs.
svm_train.py - a script for training the SVM model.
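The regex-labelling step described above (cf. regex.ipynb) can be sketched as a small rule table mapping patterns to Knowledge Graph vertices. The patterns, function name, and example chunks below are illustrative assumptions, not the project's actual rules:

```python
# Hypothetical sketch of regex labelling: assign a Knowledge Graph vertex
# to a code chunk by matching it against a list of (pattern, tag) rules.
import re

# Toy rules; the real project uses its own, richer pattern set.
RULES = [
    (re.compile(r"^\s*(import|from)\s+\w+", re.M), "import"),
    (re.compile(r"read_csv|read_json"), "data_import"),
    (re.compile(r"\.fit\("), "train"),
    (re.compile(r"\.predict\("), "predict"),
]

def regex_label(chunk: str) -> str:
    """Return the first matching vertex tag for a code chunk, or 'other'."""
    for pattern, tag in RULES:
        if pattern.search(chunk):
            return tag
    return "other"

chunks = [
    "import pandas as pd",
    "df = pd.read_csv('train.csv')",
    "model.fit(X, y)",
    "preds = model.predict(X_test)",
]
print([regex_label(c) for c in chunks])
# ['import', 'data_import', 'train', 'predict']
```

Labels produced this way serve as weak supervision for the downstream TF-IDF + logistic regression and SVM classifiers listed above.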