TEXT SCULPTOR ™ is a summary-extraction program developed by Juicycode, built on the Transformers architecture and fine-tuned from a pre-trained BART model.
With the rapid development of artificial intelligence and deep learning, significant breakthroughs have been made in natural language processing. Text summarization, an important NLP application, distills large amounts of text into concise points, saving reading time and improving the efficiency of information acquisition. In many scenarios, such as news reports, scientific papers, and corporate reports, text summarization has broad application value and strong market demand.
The aim of this project is to design and implement a text-summary extraction program based on a pre-trained BART model, using the Transformers architecture and fine-tuning techniques to achieve efficient and accurate English summarization. The project provides a user-friendly interface written in PyQt5 that implements training management, history viewing and visualization, and model testing to meet the needs of different users. It also uses multi-threading to avoid interface blocking and keep the program responsive.
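The multi-threading pattern mentioned above can be sketched as follows. This is a minimal illustration, not the project's actual code: `summarize` is a hypothetical stand-in for the fine-tuned BART call, and plain Python threads are used in place of PyQt5's `QThread`, but the idea is the same — run the slow model call off the UI thread and hand the result back through a thread-safe channel.

```python
import threading
import queue

# Hypothetical stand-in for the model call; the real program would invoke
# a fine-tuned BART summarizer here (e.g. via the transformers library).
def summarize(text: str) -> str:
    # Toy "summary": keep only the first sentence.
    return text.split(".")[0] + "."

def summarize_in_background(text: str, results: queue.Queue) -> None:
    """Run the (potentially slow) summarization off the UI thread and
    hand the result back through a thread-safe queue."""
    results.put(summarize(text))

results: queue.Queue = queue.Queue()
worker = threading.Thread(
    target=summarize_in_background,
    args=("BART is a denoising autoencoder. It works well for summarization.", results),
)
worker.start()
# In the real program the PyQt5 event loop keeps running here instead of blocking.
worker.join()
print(results.get())  # → "BART is a denoising autoencoder."
```

In PyQt5 the same idea is usually expressed with a `QThread` worker that emits a signal when the summary is ready, so the result is delivered back to the GUI thread safely.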
TEXT SCULPTOR ™ is fine-tuned from summarization pre-trained models such as bart-large-cnn and bart-base to optimize performance for specific usage scenarios. In the source code, a GPU (dedicated graphics card) is used to train the model. Training is known to fail when the GPU's dedicated memory is 6 GB or less.
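The 6 GB requirement above could be checked before launching training. A minimal sketch, assuming PyTorch is the training backend (the function name and the strict-greater-than threshold are assumptions, not taken from the project's source):

```python
GIB = 1024 ** 3  # bytes in one GiB

def has_enough_vram(total_bytes: int, required_gib: int = 6) -> bool:
    """Training is reported to fail at 6 GB of dedicated memory or less,
    so require strictly more than the threshold."""
    return total_bytes > required_gib * GIB

try:
    import torch  # optional: query the actual GPU if PyTorch is installed
    if torch.cuda.is_available():
        total = torch.cuda.get_device_properties(0).total_memory
        print("GPU OK for training:", has_enough_vram(total))
except ImportError:
    pass  # no PyTorch available: the pure check above still works on raw byte counts

print(has_enough_vram(8 * GIB))  # → True  (an 8 GiB card is enough)
print(has_enough_vram(6 * GIB))  # → False (exactly 6 GiB is not enough)
```

Failing fast with a check like this is friendlier than letting training crash mid-epoch with an out-of-memory error.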
| Version | Update |
| ------- | ------ |
| 2023.1 | Added training and testing interfaces to the program |



