
[Python] "No space left on device" when installing a 700-MB torch while space is plentiful...

Discussion in 'Python' started by Stack, September 11, 2024.


    When installing openai-whisper with Python's pip I see the following error:

    (.venv) sh-5.2$ pip install openai-whisper
    Collecting openai-whisper
    Using cached openai-whisper-20231117.tar.gz (798 kB)
    Installing build dependencies ... done
    Getting requirements to build wheel ... done
    Preparing metadata (pyproject.toml) ... done
    Collecting triton<3,>=2.0.0 (from openai-whisper)
    Using cached triton-2.3.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.4 kB)
    Collecting numba (from openai-whisper)
    Using cached numba-0.60.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.7 kB)
    Collecting numpy (from openai-whisper)
    Using cached numpy-2.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
    Collecting torch (from openai-whisper)
    Using cached torch-2.4.1-cp311-cp311-manylinux1_x86_64.whl.metadata (26 kB)
    Collecting tqdm (from openai-whisper)
    Using cached tqdm-4.66.5-py3-none-any.whl.metadata (57 kB)
    Collecting more-itertools (from openai-whisper)
    Using cached more_itertools-10.5.0-py3-none-any.whl.metadata (36 kB)
    Collecting tiktoken (from openai-whisper)
    Using cached tiktoken-0.7.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.6 kB)
    Collecting filelock (from triton<3,>=2.0.0->openai-whisper)
    Using cached filelock-3.16.0-py3-none-any.whl.metadata (3.0 kB)
    Collecting llvmlite<0.44,>=0.43.0dev0 (from numba->openai-whisper)
    Using cached llvmlite-0.43.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.8 kB)
    Collecting numpy (from openai-whisper)
    Using cached numpy-2.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
    Collecting regex>=2022.1.18 (from tiktoken->openai-whisper)
    Using cached regex-2024.9.11-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (40 kB)
    Collecting requests>=2.26.0 (from tiktoken->openai-whisper)
    Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
    Collecting typing-extensions>=4.8.0 (from torch->openai-whisper)
    Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
    Collecting sympy (from torch->openai-whisper)
    Using cached sympy-1.13.2-py3-none-any.whl.metadata (12 kB)
    Collecting networkx (from torch->openai-whisper)
    Using cached networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
    Collecting jinja2 (from torch->openai-whisper)
    Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
    Collecting fsspec (from torch->openai-whisper)
    Using cached fsspec-2024.9.0-py3-none-any.whl.metadata (11 kB)
    Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch->openai-whisper)
    Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
    Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch->openai-whisper)
    Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
    Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch->openai-whisper)
    Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
    Collecting nvidia-cudnn-cu12==9.1.0.70 (from torch->openai-whisper)
    Using cached nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)
    Collecting nvidia-cublas-cu12==12.1.3.1 (from torch->openai-whisper)
    Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
    Collecting nvidia-cufft-cu12==11.0.2.54 (from torch->openai-whisper)
    Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
    Collecting nvidia-curand-cu12==10.3.2.106 (from torch->openai-whisper)
    Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
    Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch->openai-whisper)
    Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
    Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch->openai-whisper)
    Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
    Collecting nvidia-nccl-cu12==2.20.5 (from torch->openai-whisper)
    Using cached nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl.metadata (1.8 kB)
    Collecting nvidia-nvtx-cu12==12.1.105 (from torch->openai-whisper)
    Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.7 kB)
    INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.
    Collecting torch (from openai-whisper)
    Using cached torch-2.4.0-cp311-cp311-manylinux1_x86_64.whl.metadata (26 kB)
    Using cached torch-2.3.1-cp311-cp311-manylinux1_x86_64.whl.metadata (26 kB)
    Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch->openai-whisper)
    Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
    Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch->openai-whisper)
    Using cached nvidia_nvjitlink_cu12-12.6.68-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
    Collecting charset-normalizer<4,>=2 (from requests>=2.26.0->tiktoken->openai-whisper)
    Using cached charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (33 kB)
    Collecting idna<4,>=2.5 (from requests>=2.26.0->tiktoken->openai-whisper)
    Using cached idna-3.8-py3-none-any.whl.metadata (9.9 kB)
    Collecting urllib3<3,>=1.21.1 (from requests>=2.26.0->tiktoken->openai-whisper)
    Using cached urllib3-2.2.2-py3-none-any.whl.metadata (6.4 kB)
    Collecting certifi>=2017.4.17 (from requests>=2.26.0->tiktoken->openai-whisper)
    Using cached certifi-2024.8.30-py3-none-any.whl.metadata (2.2 kB)
    Collecting MarkupSafe>=2.0 (from jinja2->torch->openai-whisper)
    Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
    Collecting mpmath<1.4,>=1.1.0 (from sympy->torch->openai-whisper)
    Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
    Using cached triton-2.3.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (168.1 MB)
    Using cached more_itertools-10.5.0-py3-none-any.whl (60 kB)
    Using cached numba-0.60.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.7 MB)
    Using cached numpy-2.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.5 MB)
    Using cached tiktoken-0.7.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
    Downloading torch-2.3.1-cp311-cp311-manylinux1_x86_64.whl (779.2 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸━━━ 719.7/779.2 MB 8.0 MB/s eta 0:00:08
    ERROR: Exception:
    Traceback (most recent call last):
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 438, in _error_catcher
    yield
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 561, in read
    data = self._fp_read(amt) if not fp_closed else b""
    ^^^^^^^^^^^^^^^^^^
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 527, in _fp_read
    return self._fp.read(amt) if amt is not None else self._fp.read()
    ^^^^^^^^^^^^^^^^^^
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 102, in read
    self.__buf.write(data)
    File "/usr/lib/python3.11/tempfile.py", line 500, in func_wrapper
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
    OSError: [Errno 28] No space left on device

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
    ^^^^^^^^^^^^^^^
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
    return func(self, options, args)
    ^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/commands/install.py", line 377, in run
    requirement_set = resolver.resolve(
    ^^^^^^^^^^^^^^^^^
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 179, in resolve
    self.factory.preparer.prepare_linked_requirements_more(reqs)
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 552, in prepare_linked_requirements_more
    self._complete_partial_requirements(
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 467, in _complete_partial_requirements
    for link, (filepath, _) in batch_download:
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/network/download.py", line 183, in __call__
    for chunk in chunks:
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/cli/progress_bars.py", line 53, in _rich_progress_bar
    for chunk in iterable:
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_internal/network/utils.py", line 63, in response_chunks
    for chunk in response.raw.stream(
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 622, in stream
    data = self.read(amt=amt, decode_content=decode_content)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 560, in read
    with self._error_catcher():
    File "/usr/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
    File "/home/user/Documents/Transcribtion/.venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 455, in _error_catcher
    raise ProtocolError("Connection broken: %r" % e, e)
    pip._vendor.urllib3.exceptions.ProtocolError: ("Connection broken: OSError(28, 'No space left on device')", OSError(28, 'No space left on device'))


    I still have 141 GiB free on my hard drive, and I tried both with and without a venv. What can I do?
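    A likely explanation, given that the traceback ends inside Python's tempfile module: pip buffers the downloading wheel in the system temporary directory, and on many Linux distributions /tmp is a RAM-backed tmpfs that can be much smaller than the 779 MB torch wheel, regardless of how much space the home partition has. A minimal diagnostic sketch (assuming a standard CPython install) to compare free space in the temp directory against the home partition:

    ```python
    import os
    import shutil
    import tempfile

    # pip streams large wheels through Python's tempfile module, so the
    # free space that matters is in the temporary directory, not necessarily
    # the partition holding the home directory.
    tmp_dir = tempfile.gettempdir()        # typically /tmp, often a small tmpfs
    home_dir = os.path.expanduser("~")

    for label, path in (("temp", tmp_dir), ("home", home_dir)):
        free_gib = shutil.disk_usage(path).free / 2**30
        print(f"{label}: {path} -> {free_gib:.1f} GiB free")
    ```

    If the temp directory turns out to be the small one, a common workaround is to point TMPDIR at a directory on the large partition before running pip, e.g. `mkdir -p ~/tmp && TMPDIR=~/tmp pip install openai-whisper`; `pip cache purge` can also reclaim space if the pip cache is the bottleneck.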

