Almost all machine learning (ML) is based on representing examples using intrinsic features. When there are multiple related ML problems (tasks), these intrinsic features can be transformed into extrinsic features by first training ML models on the other tasks and letting each model make a prediction for every example of the new task, yielding a novel representation. We call this transformational ML (TML). TML is very closely related to, and synergistic with, transfer learning, multi-task learning, and stacking. TML is applicable to improving any non-linear ML method. The models in this repository were produced for tests performed using random forests on a large-scale gene expression problem. They correspond to the 978 landmark genes in the Library of Integrated Network-Based Cellular Signatures (LINCS).
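
To make the idea concrete, the following is a minimal sketch of the TML transformation, assuming scikit-learn random forests; the task data, array shapes, and function names are illustrative only and do not describe the exact pipeline used to build the models in this repository.

# Minimal TML sketch (assumes scikit-learn and NumPy are installed).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_base_models(base_tasks, n_estimators=100, random_state=0):
    """Train one random forest per related (base) task on intrinsic features."""
    models = []
    for X, y in base_tasks:  # each task: (n_examples, n_intrinsic_features), (n_examples,)
        rf = RandomForestRegressor(n_estimators=n_estimators, random_state=random_state)
        rf.fit(X, y)
        models.append(rf)
    return models

def to_extrinsic(models, X):
    """Map intrinsic features to extrinsic features: one column per base-task
    model, holding that model's prediction for each example."""
    return np.column_stack([m.predict(X) for m in models])

# Usage: train the model for the new task on the extrinsic representation.
rng = np.random.default_rng(0)
base_tasks = [(rng.normal(size=(50, 20)), rng.normal(size=50)) for _ in range(5)]
X_new, y_new = rng.normal(size=(40, 20)), rng.normal(size=40)

base_models = fit_base_models(base_tasks)
X_new_tml = to_extrinsic(base_models, X_new)  # shape: (40, 5)
target_model = RandomForestRegressor(random_state=0).fit(X_new_tml, y_new)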
Funding
Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation
Engineering and Physical Sciences Research Council projects Robot Chemist and Action on Cancer
Alan Turing Institute project Spatial Learning: Applications in Structure Based Drug Design