pyKT: A Python Library to Benchmark Deep Learning based Knowledge Tracing Models

  • By pyKT Team
  • Last update: Jan 8, 2023
  • Comments: 5


pyKT is a Python library built upon PyTorch to train deep learning based knowledge tracing (DLKT) models. The library provides a standardized set of integrated data preprocessing procedures for 7 popular datasets across different domains, 5 detailed prediction scenarios, and 10 frequently compared DLKT approaches for transparent and extensive experiments.

Installation

To install pyKT, create a conda environment, activate it, and install the pykt-toolkit package from PyPI:

# create and activate a conda environment
conda create --name=pykt python=3.7.5
source activate pykt
# install pykt-toolkit from PyPI
pip install -U pykt-toolkit -i https://pypi.python.org/simple
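
After installation, a quick sanity check (plain Python introspection, not a pykt-specific API) confirms that the package imports from the new environment:

import pykt
print(pykt.__file__)  # should point into the pykt conda environment's site-packages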

References

Projects

  1. https://github.com/hcnoh/knowledge-tracing-collection-pytorch
  2. https://github.com/arshadshk/SAKT-pytorch
  3. https://github.com/shalini1194/SAKT
  4. https://github.com/arshadshk/SAINT-pytorch
  5. https://github.com/Shivanandmn/SAINT_plus-Knowledge-Tracing-
  6. https://github.com/arghosh/AKT
  7. https://github.com/JSLBen/Knowledge-Query-Network-for-Knowledge-Tracing
  8. https://github.com/xiaopengguo/ATKT
  9. https://github.com/jhljx/GKT

Papers

  1. DKT: Deep knowledge tracing
  2. DKT+: Addressing two problems in deep knowledge tracing via prediction-consistent regularization
  3. DKT-Forget: Augmenting knowledge tracing by considering forgetting behavior
  4. KQN: Knowledge query network for knowledge tracing: How knowledge interacts with skills
  5. DKVMN: Dynamic key-value memory networks for knowledge tracing
  6. ATKT: Enhancing Knowledge Tracing via Adversarial Training
  7. GKT: Graph-based knowledge tracing: modeling student proficiency using graph neural network
  8. SAKT: A self-attentive model for knowledge tracing
  9. SAINT: Towards an appropriate query, key, and value computation for knowledge tracing
  10. AKT: Context-aware attentive knowledge tracing

GitHub

https://github.com/pykt-team/pykt-toolkit

Comments (5)

  • 1

    AKT model throws an error when predicting on the test set

    Dear author team, hello: I am trying to take part in the AAAI competition, and when using the AKT model to predict on the test set I get the following error:

    Namespace(atkt_pad=0, save_dir='saved_model/peiyou_akt_qid_saved_model_3407_0_0.2_256_512_8_4_0.0001_0_1', test_filename='pykt_test.csv', train_ratio=0.5, use_pred=0, use_wandb=0)
    Start predicting model: akt, embtype: qid, save_dir: saved_model/peiyou_akt_qid_saved_model_3407_0_0.2_256_512_8_4_0.0001_0_1, dataset_name: peiyou
    model_config: {'dropout': 0.2, 'd_model': 256, 'd_ff': 512, 'num_attn_heads': 8, 'n_blocks': 4}
    data_config: {'dpath': '../data/peiyou', 'num_q': 7652, 'num_c': 865, 'input_type': ['questions', 'concepts'], 'max_concepts': 6, 'min_seq_len': 3, 'maxlen': 200, 'emb_path': '', 'train_valid_file': 'train_valid_sequences.csv', 'folds': [0, 1, 2, 3, 4]}
    Start predict use_pred: False, ratio: 0.5...
    Traceback (most recent call last):
      File "wandb_eval.py", line 74, in <module>
        main(params)
      File "wandb_eval.py", line 55, in main
        dfinal = evaluate_splitpred_question(model, data_config, testf, model_name, save_test_path, use_pred, ratio, atkt_pad)
      File "/media/sda2/yueying/pykt-toolkit/pykt/models/evaluate_model.py", line 611, in evaluate_splitpred_question
        qidxs, ctrues, cpreds = predict_each_group2(dtotal, dcur, dforget, curdforget, is_repeat, qidx, uid, idx, model_name, model, t, end, fout, atkt_pad)
      File "/media/sda2/yueying/pykt-toolkit/pykt/models/evaluate_model.py", line 1034, in predict_each_group2
        y, reg_loss = model(ccc.long(), ccr.long(), ccq.long())
      File "/media/sda1/yueying/miniconda3/pkgs/envs/pykt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/media/sda2/yueying/pykt-toolkit/pykt/models/akt.py", line 84, in forward
        q_embed_data, qa_embed_data = self.base_emb(q_data, target)
      File "/media/sda2/yueying/pykt-toolkit/pykt/models/akt.py", line 77, in base_emb
        qa_embed_data = self.qa_embed(target)+q_embed_data
      File "/media/sda1/yueying/miniconda3/pkgs/envs/pykt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/media/sda1/yueying/miniconda3/pkgs/envs/pykt/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward
        self.norm_type, self.scale_grad_by_freq, self.sparse)
      File "/media/sda1/yueying/miniconda3/pkgs/envs/pykt/lib/python3.7/site-packages/torch/nn/functional.py", line 2199, in embedding
        return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    IndexError: index out of range in self
    

    I hope the authors can provide some clues about what is causing this error. Thanks!
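
    A common cause of this class of error (not necessarily the cause here) is that the test split contains question or concept ids outside the range the embedding layers were built with, i.e. ids greater than or equal to num_q or num_c in data_config. A minimal, pykt-independent reproduction:

    import torch
    import torch.nn as nn

    # an embedding table built for ids 0..864 (e.g. num_c = 865 in data_config)
    emb = nn.Embedding(num_embeddings=865, embedding_dim=8)
    emb(torch.tensor([0, 12, 864]))  # fine
    emb(torch.tensor([865]))         # IndexError: index out of range in self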

  • 2

    module 'pykt' or package 'pykt'?

    Hi, pykt is a very nice project, but I got the following traceback on my machine:

      File "**/pykt-toolkit/examples/wandb_train.py", line 71, in main
        (train_loader, valid_loader) = init_dataset4train(dataset_name, model_name, data_config, fold, batch_size)
    ValueError: too many values to unpack (expected 2)
    

    After some troubleshooting, I found that there are both a local module named pykt and an installed package named pykt. Their paths are as follows:

    • module pykt: **/pykt-toolkit/pykt
    • package pykt: **/envs/pykt/lib/python3.7/site-packages/pykt

    According to the Modules section of the Python tutorial (https://docs.python.org/3/tutorial/modules.html), a from-import will pick up the installed package pykt first.

    So, have you encountered the same problem, or can you give me some suggestions on how to solve it? Thank you very much!
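
    One way to diagnose which pykt wins is plain Python introspection (nothing pykt-specific is assumed here); the editable-install note at the end is a common Python convention, not an official pykt instruction:

    import importlib
    import sys

    # show which "pykt" the interpreter actually resolves, and the search path order
    mod = importlib.import_module("pykt")
    print("pykt resolved to:", mod.__file__)
    print("sys.path starts with:", sys.path[:3])

    # If the site-packages copy shadows the local checkout (or vice versa), installing
    # the checkout in editable mode makes both names point at the same code:
    #     pip uninstall pykt-toolkit
    #     pip install -e /path/to/pykt-toolkit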

  • 3

    multi-skill exercises?

    Hi, can I use multi-skill exercises from my own dataset? For example, I have an exercise list $E = \{e_1, e_2, \ldots, e_n\}$, and an exercise $e_i$ can be tagged with several skills $\{s_j, s_k, s_h\}$ ($j \neq k$, $k \neq h$, $j \neq h$). Is a dataset from another source (such as codeforces.com or luogu.com.cn) supported?
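
    A hedged sketch of how such a log could be flattened into one student row of a pykt-style sequence file; the column names and the "_" separator for multi-concept questions are assumptions to be checked against the pykt data preprocessing documentation:

    import pandas as pd

    # (question id, skill ids, response 0/1, timestamp) for one student
    interactions = [
        ("e1", ["s3", "s7"], 1, 1670000000),
        ("e2", ["s2", "s7", "s9"], 0, 1670000100),
    ]

    row = {
        "uid": "student_001",
        "questions": ",".join(q for q, _, _, _ in interactions),
        # the skills of a multi-skill question joined by "_" (assumed separator)
        "concepts": ",".join("_".join(skills) for _, skills, _, _ in interactions),
        "responses": ",".join(str(r) for _, _, r, _ in interactions),
        "timestamps": ",".join(str(t) for _, _, _, t in interactions),
    }
    pd.DataFrame([row]).to_csv("my_dataset.csv", index=False)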

  • 4

    target response issue in AKT model

    Hello, I want to ask your opinion on why the AKT model performs best in your carefully designed framework (https://arxiv.org/abs/2206.11460).

    [Figure: the AKT model architecture, as presented in the AKT paper.]

    qa_embed_diff_data = self.qa_embed_diff(target)  # f_(ct,rt) or h_rt: the (q_t, r_t) difference vector
    if self.separate_qa:
        qa_embed_data = qa_embed_data + pid_embed_data * qa_embed_diff_data  # uq * f_(ct,rt) + e_(ct,rt)
    else:
        # + uq * (h_rt + d_ct): question-response embedding diff + question embedding diff
        qa_embed_data = qa_embed_data + pid_embed_data * (qa_embed_diff_data + q_embed_diff_data)

    The code above is from your implementation in pykt/models/akt.py.

    I think you followed the approach exactly as the paper's authors described it. My point is that the AKT model may perform best because it gets a chance to see the target answers through the "f(c_t, r_t) variation vector" (in the paper), which is qa_embed_diff_data (in your code).

    As a result, in my opinion, AKT achieves the best performance because of this already-known-target issue.

    To resolve the issue, I suggest modifying the Architecture forward function as in the following code:

        else:  # don't peek at the current response
            pad_zero = torch.zeros(batch_size, 1, x.size(-1)).to(self.device)
            q = x
            k = torch.cat([pad_zero, x[:, :-1, :]], dim=1)
            v = torch.cat([pad_zero, y[:, :-1, :]], dim=1)
            # apply_pos=True: add FFN + residual + layer norm; for non-first blocks, attention
            # of q over steps 0..t-1, corresponding to the Knowledge Retriever in the figure
            x = block(mask=0, query=q, key=k, values=v, apply_pos=True)
            # mask=0: the current response r_t is not visible
            flag_first = True
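
    A toy illustration (editorial sketch, not pykt code) of the one-step shift above: after padding with a zero row and dropping the last step, position t of the shifted value tensor only carries information from steps 0..t-1, so the current response cannot leak:

    import torch

    batch_size, seq_len, d_model = 1, 4, 2
    # pretend y[:, t, :] encodes the response at step t (values 0, 1, 2, 3)
    y = torch.arange(seq_len, dtype=torch.float).view(1, seq_len, 1).expand(batch_size, seq_len, d_model)
    pad_zero = torch.zeros(batch_size, 1, d_model)
    v = torch.cat([pad_zero, y[:, :-1, :]], dim=1)
    print(v[0, :, 0])  # tensor([0., 0., 1., 2.]) -> step t only sees responses up to t-1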
    

    thank you for your attention :)

  • 5

    What is configs/kt_config.json for?

    Hello, thanks for the nice knowledge tracing framework. I'm also impressed by the thorough experiments you've done in the paper (https://arxiv.org/abs/2206.11460).

    I'm now working on reproducing the Table 2 results from the paper with the assist2015 dataset you shared, and I got confused about how to obtain the best hyperparameters mentioned in A.3 Hyperparameter Search Details of Representative DLKT Baselines.

    So, there are two different settings in your code:

    1. the values in configs/kt_config.json
    2. the default values of the arguments in wandb/wandb_{model_name}_train.py

    Which values should I follow to get results close to Table 2?

    Thanks.
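
    A small editorial sketch for inspecting the first of the two settings; it assumes only that kt_config.json is a JSON file keyed by model name (check the actual file layout), so its values can be compared by eye with the argparse defaults of the corresponding training script:

    import json

    # load the shared config and print one model's entry
    with open("configs/kt_config.json") as f:
        kt_config = json.load(f)
    print(json.dumps(kt_config.get("akt", {}), indent=2))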