# Utilities & Extras

Helpers and utility functions.

### fix_random_seed function

Set the random seed for `random`, `numpy.random` and `cupy.random` (if available). Should be called at the top of a file or function.

Example:

```python
from thinc.api import fix_random_seed
fix_random_seed(0)
```

Argument | Type | Description |
---|---|---|
`seed` | `int` | The seed. Defaults to `0`. |

### prefer_gpu function

Allocate data and perform operations on GPU, if available. If data has already been allocated on CPU, it will not be moved. Ideally, this function should be called right after importing Thinc.

Example:

```python
from thinc.api import prefer_gpu
is_gpu = prefer_gpu()
```

Argument | Type | Description |
---|---|---|
`gpu_id` | `int` | Device index to select. Defaults to `0`. |
RETURNS | `bool` | Whether the GPU was activated. |

### require_gpu function

Allocate data and perform operations on GPU. Will raise an error if no GPU is available. If data has already been allocated on CPU, it will not be moved. Ideally, this function should be called right after importing Thinc.

Example:

```python
from thinc.api import require_gpu
require_gpu()
```

Argument | Type | Description |
---|---|---|
RETURNS | `bool` | `True`. |

### set_active_gpu function

Set the current GPU device for `cupy` and `torch` (if available).

Example:

```python
from thinc.api import set_active_gpu
set_active_gpu(0)
```

Argument | Type | Description |
---|---|---|
`gpu_id` | `int` | Device index to select. |
RETURNS | `cupy.cuda.Device` | The device. |

### use_pytorch_for_gpu_memory function

Route GPU memory allocation via PyTorch. This is recommended when using PyTorch and `cupy` together, as otherwise OOM errors can occur when there’s available memory sitting in the other library’s pool. We’d like to support routing TensorFlow memory allocation via PyTorch as well (or vice versa), but do not currently have an implementation for it.

Example:

```python
from thinc.api import prefer_gpu, use_pytorch_for_gpu_memory

if prefer_gpu():
    use_pytorch_for_gpu_memory()
```

### use_tensorflow_for_gpu_memory function

Route GPU memory allocation via TensorFlow. This is recommended when using TensorFlow and `cupy` together, as otherwise OOM errors can occur when there’s available memory sitting in the other library’s pool. We’d like to support routing PyTorch memory allocation via TensorFlow as well (or vice versa), but do not currently have an implementation for it.

Example:

```python
from thinc.api import prefer_gpu, use_tensorflow_for_gpu_memory

if prefer_gpu():
    use_tensorflow_for_gpu_memory()
```

### get_width function

Infer the width of a batch of data, which can be either an n-dimensional array (use its shape) or a sequence of arrays (use the shape of the first element).

Argument | Type | Description |
---|---|---|
`X` | `Union[ArrayXd, Ragged, Padded, Sequence[ArrayXd]]` | The array(s). |
keyword-only | | |
`dim` | `int` | Which dimension to get the size for. Defaults to `-1`. |
RETURNS | `int` | The array’s inferred width. |
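For illustration, the array cases can be sketched in plain `numpy`. The helper name `infer_width` is hypothetical and this is not Thinc’s implementation; the `Ragged` and `Padded` cases are omitted:

```python
import numpy as np

def infer_width(X, dim=-1):
    """Sketch: use the shape of an array, or the shape of the
    first element of a sequence of arrays."""
    if hasattr(X, "shape"):
        return X.shape[dim]
    return X[0].shape[dim]

# A single 2d array: the width is the size of the last dimension.
assert infer_width(np.zeros((2, 128))) == 128
# A sequence of arrays: the width comes from the first element.
batch = [np.zeros((4, 300)), np.zeros((7, 300))]
assert infer_width(batch) == 300
```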

### to_categorical function

Convert a class vector (integers) to a binary class matrix. Based on `keras.utils.to_categorical`.

Argument | Type | Description |
---|---|---|
`Y` | `IntsXd` | Class vector to be converted into a matrix (integers from `0` to `n_classes`). |
`n_classes` | `Optional[int]` | Total number of classes. |
RETURNS | `Floats2d` | A binary matrix representation of the input. The axis representing the classes is placed last. |
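As a sketch of the equivalent computation in plain `numpy` (an illustration, not Thinc’s implementation), a one-hot matrix can be built by indexing into an identity matrix:

```python
import numpy as np

Y = np.array([0, 2, 1, 2])  # class vector, integers from 0 to n_classes - 1
n_classes = 3
# Row i of the result is the one-hot encoding of label Y[i],
# with the class axis placed last.
one_hot = np.eye(n_classes, dtype="float32")[Y]
assert one_hot.shape == (4, 3)
assert (one_hot.argmax(axis=-1) == Y).all()
```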

### xp2torch function

Convert a `numpy` or `cupy` tensor to a PyTorch tensor.

Argument | Type | Description |
---|---|---|
`xp_tensor` | `ArrayXd` | The tensor to convert. |
`requires_grad` | `bool` | Whether to backpropagate through the variable. |
RETURNS | `torch.Tensor` | The converted tensor. |

### torch2xp function

Convert a PyTorch tensor to a `numpy` or `cupy` tensor.

Argument | Type | Description |
---|---|---|
`torch_tensor` | `torch.Tensor` | The tensor to convert. |
RETURNS | `ArrayXd` | The converted tensor. |
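Assuming `numpy` and `torch` are both installed, the round trip these two helpers perform can be sketched with PyTorch’s own bridge functions. This is an illustration of the CPU/`numpy` case only, not Thinc’s implementation:

```python
import numpy as np
import torch

xp_tensor = np.arange(6, dtype="float32").reshape(2, 3)
torch_tensor = torch.from_numpy(xp_tensor)  # numpy -> torch
back = torch_tensor.detach().numpy()        # torch -> numpy

assert isinstance(torch_tensor, torch.Tensor)
assert np.array_equal(back, xp_tensor)
```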

### xp2tensorflow function

Convert a `numpy` or `cupy` tensor to a TensorFlow tensor.

Argument | Type | Description |
---|---|---|
`xp_tensor` | `ArrayXd` | The tensor to convert. |
`requires_grad` | `bool` | Whether to backpropagate through the variable. |
`as_variable` | `bool` | Convert the result to a `tensorflow.Variable` object. |
RETURNS | `tensorflow.Tensor` | The converted tensor. |

### tensorflow2xp function

Convert a TensorFlow tensor to a `numpy` or `cupy` tensor.

Argument | Type | Description |
---|---|---|
`tensorflow_tensor` | `tensorflow.Tensor` | The tensor to convert. |
RETURNS | `ArrayXd` | The converted tensor. |

### xp2mxnet function

Convert a `numpy` or `cupy` tensor to an MXNet tensor.

Argument | Type | Description |
---|---|---|
`xp_tensor` | `ArrayXd` | The tensor to convert. |
`requires_grad` | `bool` | Whether to backpropagate through the variable. |
RETURNS | `mx.nd.NDArray` | The converted tensor. |

### mxnet2xp function

Convert an MXNet tensor to a `numpy` or `cupy` tensor.

Argument | Type | Description |
---|---|---|
`mx_tensor` | `mx.nd.NDArray` | The tensor to convert. |
RETURNS | `ArrayXd` | The converted tensor. |

### Errors

Thinc uses the following custom errors:

Name | Description |
---|---|
`ConfigValidationError` | Raised if invalid config settings are encountered by `Config` or the `registry`, or if resolving and validating the referenced functions fails. |
`DataValidationError` | Raised if `Model.initialize` is called with sample input or output data that doesn’t match the expected input or output of the network, or leads to mismatched input or output in any of its layers. |