Commit Graph

7 Commits (b9b3af8003df35e7094525f9aea3ca605a0da9f9)

Author SHA1 Message Date
Henry Tu e869e68559 Fix LTC lib_torch_mlir_ltc.so import error (#1283)
* Build LTC to _mlir_libs directory

* Update CMakeLists.txt
2022-08-25 18:25:01 -04:00
Henry Tu 47bb38d180 Reference Lazy Backend (#1045)
* Changed Example MLIR backend to Reference MLIR backend

* Moved reference_ltc_backend into csrc

* Merged sys_utils.h

* Renamed reference_ltc_backend to reference_lazy_backend

* Addressed review comments

* Update docs with new library name

* Removed _REFERENCE_LAZY_BACKEND from .gitignore

* Added reference_lazy_backend to the TorchMLIRPythonModules dependency list

* Fixed typo in `ltc_examples.md` (missed an instance where `ltc_backend` was used instead of `lazy_backend`)
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim fb21c9e6cb Integrate Functionalization Pass (#998)
* Fix autogen build dir issue

* Got functionalization pass to compile

* Add slice/diagonal backwards functionalization

* Fix codegen invocation in CMakeLists.txt

* Add functionalization view ops

* Fix logsumexp out functionalization

* Fix ComputationPtr

* Blacklist new_empty op

* Add op comparison

* Remove unnecessary ops

Co-authored-by: Henry Tu <henry.tu@cerebras.net>
2022-07-30 09:40:02 -04:00
Henry Tu 0cee0dc978 Only import the LTC backend that's used (#939) 2022-07-30 09:40:02 -04:00
Henry Tu dfcc26556a Added e2e LTC tests (#916)
* Added e2e LTC Torch MLIR tests

* Fix seed for reproducibility

* Check if computation is None before getting debug string

* Updated unit tests, and added numeric tests

* Print name of the model layer that fails numeric validation

* Run LTC e2e test with CI/CD

* Set seed in main function instead of at the beginning of execution

* Add comment to specify number of digits of precision

* Fixed typo

* Remove tests for LTC example models

* Added LTC option to torchscript e2e

* Implement compile and run for LTC e2e test

* xfail all tests that use ops that aren't currently supported
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim d9aee0d7a7 E2E HuggingFace Bert using LTC Backend (#912)
* Update native function definitions

* Add ops to support bert lowering

- Add empty_strided and as_strided

- Restore zeros_like to op blacklist (without this, tensors would be unintentionally created on the CPU device rather than the lazy device)

- Check for composite implicit ops and add device data IR

- Also fix codegen for functionalization

* Add autogen to CMakeList

* Remove PyTorch submodule

* Reduced BERT model size

* Print Mark Step status in Torch MLIR LTC debug string

* Apply fixes to work with latest upstream/main

- Pass importOptions into getMlirTypeFromTorchType during NodeImporter::importNode

  Without this, the created tensor type may be mismatched, since ImportOptions may cause `vtensor` to be used instead of `tensor`

* Update shape inference functions

- Fixed compute_shape_native_batch_norm when mean and var are uninitialized

  Previously, fewer than three shapes would be returned if either mean or var didn't exist. Now they are initialized with a vector matching the number of channels.

- Implemented compute_shape_mul

- Fixed bug in reshape shape inference error message

* Get MLIR backend more consistent with TS backend

- Remove LazyNativeFunctions::_unsafe_view from autogen

- Blacklist ops to make JIT graph more like output of TS backend

- Print graph when an SSA value has a mismatch between types and results

- Remove normalize_index from LazyShapeInference

- Fix seeds for LTC example models

* Update and clean up shape inference functions

- Prune shape inference functions

- Add shape inference function for GenerateSlice

- Add shape inference function for GenerateCopy

Co-authored-by: Henry Tu <henry.tu@cerebras.net>
2022-07-30 09:40:02 -04:00
Henry Tu de5b380143 Bert example and relevant shape inference functions (#831) 2022-07-30 09:40:02 -04:00