numcu._dist_ver

numcu.lib

Thin wrappers around the numcu C++/CUDA module

check_cuvec

def check_cuvec(a, shape, dtype)

Asserts that CuVec a has the given shape & dtype
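
A minimal usage sketch (hypothetical values; allocation via cuvec.zeros is an assumption, numcu being built on cuvec):

```python
import cuvec

from numcu.lib import check_cuvec

x = cuvec.zeros((64, 64), "float32")   # CuVec-backed array
check_cuvec(x, (64, 64), "float32")    # matching shape & dtype: expected to pass silently
# check_cuvec(x, (32, 32), "float32")  # mismatched shape: expected to raise
```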

check_similar

def check_similar(*arrays, allow_none=True)

Asserts that all arrays are CuVecs of the same shape & dtype
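
A minimal sketch along the same lines (hypothetical arrays allocated with cuvec.zeros):

```python
import cuvec

from numcu.lib import check_similar

a = cuvec.zeros((16, 16), "float32")
b = cuvec.zeros((16, 16), "float32")
check_similar(a, b)        # same shape & dtype: expected to pass
check_similar(a, b, None)  # None entries should be skipped while allow_none=True
```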

div

def div(numerator,
        divisor,
        default=FLOAT_MAX,
        output=None,
        dev_id=0,
        sync=True)

Elementwise output = numerator / divisor if divisor else default

Args:
- numerator (ndarray): input.
- divisor (ndarray): input.
- default (float): value for zero-division errors.
- output (ndarray): pre-existing output memory.
- dev_id (int or bool): GPU index (False for CPU).
- sync (bool): whether to cudaDeviceSynchronize() after GPU operations.
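
A minimal sketch, assuming plain NumPy inputs are accepted (as the ndarray annotations suggest) and that the result array is returned when no output is supplied:

```python
import numpy as np

from numcu.lib import div

num = np.asarray([1, 2, 3], dtype="float32")
den = np.asarray([2, 0, 4], dtype="float32")
res = div(num, den, default=0)  # zero-divisor entries become `default`
# expected values: [0.5, 0.0, 0.75]
```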

mul

def mul(a, b, output=None, dev_id=0, sync=True)

Elementwise output = a * b

Args:
- a (ndarray): input.
- b (ndarray): input.
- output (ndarray): pre-existing output memory.
- dev_id (int or bool): GPU index (False for CPU).
- sync (bool): whether to cudaDeviceSynchronize() after GPU operations.
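
A minimal sketch under the same assumptions as the div example above; per the docstring, dev_id=False selects the CPU code path:

```python
import numpy as np

from numcu.lib import mul

a = np.full((2, 3), 2, dtype="float32")
b = np.full((2, 3), 4, dtype="float32")
res = mul(a, b)                    # elementwise product: all entries 8.0
res_cpu = mul(a, b, dev_id=False)  # force the CPU path instead of the GPU
```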

add

def add(a, b, output=None, dev_id=0, sync=True)

Elementwise output = a + b

Args:
- a (ndarray): input.
- b (ndarray): input.
- output (ndarray): pre-existing output memory.
- dev_id (int or bool): GPU index (False for CPU).
- sync (bool): whether to cudaDeviceSynchronize() after GPU operations.
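
A minimal sketch of reusing pre-existing output memory, assuming a cuvec-allocated array is acceptable for the output argument:

```python
import cuvec
import numpy as np

from numcu.lib import add

a = np.arange(6, dtype="float32").reshape(2, 3)
b = np.ones((2, 3), dtype="float32")
out = cuvec.zeros((2, 3), "float32")  # pre-existing output memory, reusable across calls
add(a, b, output=out)                 # writes a + b into `out`
```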