tf_alloc
Simplifying GPU allocation for TensorFlow
- Developer: korkite (Junseo Ko)
Installation
pip install tf-alloc
⭐️ Why tf_alloc? Problems?
- Compared to PyTorch, TensorFlow allocates all GPU memory to a single training job by default.
- This is wasteful, because many training jobs do not use the whole GPU memory.
- To solve this problem, TF engineers usually apply one of two methods:
- Limit training to a single GPU
- Limit training to a certain percentage of a GPU's memory
- However, these methods require complex code and manual memory management, as the sketch below illustrates.
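For reference, this is roughly what that manual setup looks like with the stock TensorFlow 2.x configuration API; the GPU index and the 8192 MB cap below are arbitrary illustrative values, not tf_alloc defaults.
# Manual GPU pinning and memory capping with plain TensorFlow 2.x (illustrative values).
# This must run before any operation initializes the GPUs.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')

# 1) Restrict this process to a single GPU (here: the first one).
tf.config.set_visible_devices(gpus[0], 'GPU')

# 2) Cap that GPU at a fixed amount of memory (here: 8192 MB).
tf.config.set_logical_device_configuration(
    gpus[0],
    [tf.config.LogicalDeviceConfiguration(memory_limit=8192)],
)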
⭐️ Why tf_alloc? How to solve?
tf_alloc simplifies and automates GPU allocation using these two methods.
⭐️ How to allocate?
- Before using tf_alloc, you have to install a TensorFlow build that fits your environment.
- This library does not install a specific TensorFlow version for you.
# On the top of the code
from tf_alloc import allocate as talloc
talloc(gpu=1, percentage=0.5)
import tensorflow as tf
""" your code"""
This is all the code you need to allocate a GPU at a certain percentage.
Parameters:
- gpu = the ID of the GPU you want to use (if you have two GPUs, the valid IDs are 0 and 1)
- percentage = the fraction of memory to use on that GPU; 1.0 for maximum use (see the sketch below for how such a fraction can map to an absolute cap)
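The README does not describe the internals, so as a sketch only: one plausible way to turn such a percentage into a hard memory cap is to query the card's total memory (here via pynvml, an assumption) and scale it; the helper name below is hypothetical and not part of tf_alloc.
# Sketch only: converting a percentage into an absolute memory cap (not tf_alloc internals).
import pynvml

def percentage_to_memory_limit_mb(gpu: int, percentage: float) -> int:
    """Return `percentage` of the GPU's total memory, in megabytes."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu)
    total_bytes = pynvml.nvmlDeviceGetMemoryInfo(handle).total
    pynvml.nvmlShutdown()
    return int(total_bytes / (1024 * 1024) * percentage)

# e.g. percentage_to_memory_limit_mb(1, 0.5) is roughly half of GPU 1's memory in MB,
# which could then be used as memory_limit in the TensorFlow sketch shown earlier.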
⭐️ Additional Functions
GET GPU Objects
gpu_objs = get_gpu_objects()
- With this function, you get GPU objects that contain GPU information.
- You can set the GPU backend by using this function; a small usage sketch follows.
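A minimal usage sketch; the import path mirrors the allocate example above, and since the exact shape of the returned objects is not documented here, the example only iterates over them and prints what each one carries.
from tf_alloc import get_gpu_objects

gpu_objs = get_gpu_objects()

# Inspect the information each GPU object carries; the printed fields are whatever
# tf_alloc provides (no specific attributes are assumed here).
for gpu in gpu_objs:
    print(gpu)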
GET CURRENT STATE
Default
current(
    gpu_id=False,
    total_memory=False,
    used=False,
    free=False,
    percentage_of_use=False,
    percentage_of_free=False,
)
- You can use this function to see the current GPU state and the maximum allocation percentage currently possible.
- Called without any parameters, it only shows the maximum allocation percentage currently possible.
- It is a command-line visualizer; it does not return values. A short usage sketch follows the parameter list below.
Parameters
- gpu_id = visualize the GPU ID number
- total_memory = visualize the total memory of the GPU
- used = visualize the used memory of the GPU
- free = visualize the free memory of the GPU
- percentage_of_used = visualize the percentage of GPU memory that is used
- percentage_of_free = visualize the percentage of GPU memory that is free
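A minimal usage sketch based on the signature and flags above; importing current from the tf_alloc top level mirrors the allocate example and is an assumption, and the exact layout of the printed report is up to the library.
from tf_alloc import current  # assumed top-level export, like allocate

# Default call: only shows the maximum allocation percentage currently possible.
current()

# Fuller report: enable the columns you want to see.
current(gpu_id=True, total_memory=True, used=True, free=True)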
In Korean, keeping it simple!
Installation
pip install tf-alloc
Problem definition:
- Unlike PyTorch, TensorFlow allocates the entire GPU to a single training job.
- However, since the job does not actually use the whole GPU, a lot of memory is wasted.
- To prevent this, two methods are commonly used:
- Restrict training to a single GPU
- Restrict training to only a certain amount of memory on the GPU
- These are the two methods. However, they require complex code and memory management.
Solution:
- To solve this, I wrote code that automatically decides which GPU to use and how much of it to allocate.
- You only need to call a single function.
# On the top of the code
from tf_alloc import allocate as talloc
talloc(gpu=1, percentage=0.5)
import tensorflow as tf
""" your code"""
- At the top of your code, import the allocate function from tf_alloc and call it with the gpu and percentage parameters.
- It then automatically allocates the chosen GPU at the chosen fraction.
- It is very easy.
Parameter description
- gpu = the ID of the GPU you want to use (if you have two GPUs, the IDs are 0 and 1).
- percentage = the fraction of the selected GPU to use (1.0 uses the whole GPU).
- If you are not sure what percentage to use, just try a value between 0 and 1: if it is too large, an error tells you the maximum amount currently available, so you can use it without worry. Because it does not interfere with other training jobs, this is much safer than allocating memory while watching nvidia-smi.
- Only the core features are described here; for the other features, please see the English section above.