Installing tensorflow-gpu and Registering It with Jupyter on Windows

  • This assumes a Windows environment with CUDA 9.0, cuDNN 7.4, and Anaconda already installed.
  • Open the Anaconda Prompt.
  • Create a new virtual environment with the command below. tf is the environment name, and Python 3.6 is used here.
    • conda create -n tf pip python=3.6
  • Activate the environment.
    • activate tf (or source activate tf)
  • Upgrade pip.
    • python -m pip install --upgrade pip
  • Install tensorflow-gpu (a quick GPU check is sketched after this list).
    • pip install --ignore-installed --upgrade tensorflow-gpu
  • To register the environment as a Jupyter kernel, run the commands below.
    • conda install notebook ipykernel
    • python -m ipykernel install --user --name tf --display-name "Tensorflow"
  • List environments: conda info --envs
  • Remove a conda environment: conda remove --name ENV_NAME --all
  • Remove the kernel from Jupyter: jupyter kernelspec uninstall tf
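
To confirm that the GPU build is actually being used, a minimal check can be run inside the tf environment. This is only a sketch assuming the TensorFlow 1.x API that matches the CUDA 9.0 builds, not part of the original steps; jupyter kernelspec list will also show whether the "Tensorflow" kernel was registered.

# minimal GPU visibility check (assumes TensorFlow 1.x)
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())       # True if a CUDA GPU is visible to TensorFlow
print(device_lib.list_local_devices())  # lists the CPU and GPU devices TensorFlow can see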

Installing PyTorch and Registering It with Jupyter

  • The environment is Windows 10 with Anaconda.
  • conda create -y -n pytorch ipykernel
  • activate pytorch
  • Follow the PyTorch link, pick the configuration that matches your setup, and enter the command it shows.
  • conda install pytorch cuda90 -c pytorch
  • pip install torchvision
  • Once the installation finishes, download the example code and run it.
  • python tensor_tutorial.py
  • If it runs correctly, PyTorch is installed (a quick CUDA check is also sketched after this list).
  • To register the environment with Jupyter, run the following command.
    • python -m ipykernel install --user --name pytorch --display-name "PyTorch"
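
Before using the kernel, it can be worth checking that PyTorch actually sees the GPU; a minimal sketch (not part of the original post):

import torch

print(torch.__version__)               # installed PyTorch version
print(torch.cuda.is_available())       # True if the CUDA build can see a GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first GPU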

tensor_tutorial.py

# -*- coding: utf-8 -*-
"""
What is PyTorch?
================

It’s a Python-based scientific computing package targeted at two sets of
audiences:

-  A replacement for NumPy to use the power of GPUs
-  a deep learning research platform that provides maximum flexibility
   and speed

Getting Started
---------------

Tensors
^^^^^^^

Tensors are similar to NumPy’s ndarrays, with the addition being that
Tensors can also be used on a GPU to accelerate computing.
"""

from __future__ import print_function
import torch

###############################################################
# Construct a 5x3 matrix, uninitialized:

x = torch.empty(5, 3)
print(x)

###############################################################
# Construct a randomly initialized matrix:

x = torch.rand(5, 3)
print(x)

###############################################################
# Construct a matrix filled with zeros, of dtype long:

x = torch.zeros(5, 3, dtype=torch.long)
print(x)

###############################################################
# Construct a tensor directly from data:

x = torch.tensor([5.5, 3])
print(x)

###############################################################
# Or create a tensor based on an existing tensor. These methods
# will reuse properties of the input tensor, e.g. dtype, unless
# new values are provided by the user.

x = x.new_ones(5, 3, dtype=torch.double)      # new_* methods take in sizes
print(x)

x = torch.randn_like(x, dtype=torch.float)    # override dtype!
print(x)                                      # result has the same size

###############################################################
# Get its size:

print(x.size())
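
# torch.Size behaves like a Python tuple (see the note below), so it can be
# indexed and unpacked; a small added illustration, not part of the original tutorial
rows, cols = x.size()
print(rows, cols)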

###############################################################
# .. note::
#     ``torch.Size`` is in fact a tuple, so it supports all tuple operations.
#
# Operations
# ^^^^^^^^^^
# There are multiple syntaxes for operations. In the following
# example, we will take a look at the addition operation.
#
# Addition: syntax 1
y = torch.rand(5, 3)
print(x + y)

###############################################################
# Addition: syntax 2

print(torch.add(x, y))

###############################################################
# Addition: providing an output tensor as argument
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)

###############################################################
# Addition: in-place

# adds x to y
y.add_(x)
print(y)

###############################################################
# .. note::
#     Any operation that mutates a tensor in-place is post-fixed with an ``_``.
#     For example: ``x.copy_(y)`` and ``x.t_()`` will change ``x``.
#
# You can use standard NumPy-like indexing with all bells and whistles!

print(x[:, 1])

###############################################################
# Resizing: If you want to resize/reshape tensor, you can use ``torch.view``:
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())

###############################################################
# If you have a one element tensor, use ``.item()`` to get the value as a
# Python number
x = torch.randn(1)
print(x)
print(x.item())

###############################################################
# **Read later:**
#
#
#   100+ Tensor operations, including transposing, indexing, slicing,
#   mathematical operations, linear algebra, random numbers, etc.,
#   are described
#   `here <http://pytorch.org/docs/torch>`_.
#
# NumPy Bridge
# ------------
#
# Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
#
# The Torch Tensor and NumPy array will share their underlying memory
# locations, and changing one will change the other.
#
# Converting a Torch Tensor to a NumPy Array
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

a = torch.ones(5)
print(a)

###############################################################
#

b = a.numpy()
print(b)

###############################################################
# See how the numpy array changed in value.

a.add_(1)
print(a)
print(b)

###############################################################
# Converting NumPy Array to Torch Tensor
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# See how changing the np array changed the Torch Tensor automatically

import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)

###############################################################
# All the Tensors on the CPU except a CharTensor support converting to
# NumPy and back.
#
# CUDA Tensors
# ------------
#
# Tensors can be moved onto any device using the ``.to`` method.

# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!
