
[Python] Model Loader errors in Text Gen Web UI

Discussion in 'Python' started by Stack, October 6, 2024 at 19:12.

  1. Stack

    Stack Participating Member

    I have tried loading multiple models from HuggingFace using Text Gen Web UI, but no matter the model or the loader, I get the same "ModuleNotFoundError" for the loaders.

    Importantly, I am using an Intel i7 and not a GPU, but I have been able to run smaller models using different UI tools in the past.

    Steps I followed (a rough command-line equivalent is sketched after this list):

    1. Cloned Text Generation Web UI from GitHub and created a new environment in Anaconda Navigator.
    2. Opened Text Generation Web UI in Visual Studio Code.
    3. Selected the environment's Python as the Python interpreter.
    4. Activated the relevant conda environment and installed the requirements.txt file for Text Generation Web UI.
    5. Started the application using the one_click.py file that comes with Text Generation Web UI.
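
    For reference, the setup above was roughly equivalent to the commands below. This is only a sketch: the repository URL is the standard oobabooga one, and the environment name "textgen" and Python version are illustrative, not necessarily the exact values I used.

    # rough command-line equivalent of steps 1-5 (run from an Anaconda-enabled shell)
    git clone https://github.com/oobabooga/text-generation-webui
    cd text-generation-webui
    conda create -n textgen python=3.11
    conda activate textgen
    pip install -r requirements.txt
    python one_click.py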

    After failing to get a model to load, I tried many things (an environment sanity check is sketched after this list):

    1. Re-installing pip packages.
    2. Installing a different version of torch: pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
    3. Deleting the environment and starting over.
    4. Asking ChatGPT and Copilot a million questions.
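
    To rule out the packages landing in a different interpreter than the one one_click.py runs under, I also did a quick sanity check along these lines (run inside the activated conda environment; exllamav2 is just one of the modules named in the errors below):

    # confirm which interpreter and pip the shell is actually using
    python -c "import sys; print(sys.executable)"
    pip --version

    # try the imports that the loaders complain about
    python -c "import exllamav2"
    python -c "import torch; print(torch.__version__)"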

    The same errors appear over and over. It is driving me crazy!

    Examples (each of these has been installed in the correct environment):


    ModuleNotFoundError: Failed to import 'autogptq'. Please install it manually following the instructions in the AutoGPTQ GitHub repository.

    ModuleNotFoundError: No module named 'exllamav2'

    OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\meta-llama_Llama-3.2-1B.
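
    For the last error, listing the model folder named in the message shows whether any of the weight files it expects (pytorch_model.bin, model.safetensors, etc.) were actually downloaded:

    # inspect the model folder from the OSError above
    dir models\meta-llama_Llama-3.2-1B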

