Comment by brightstar18
1 day ago
Product seems cool. But can you help me understand if what you are doing is different from the following:

> you put in a prompt
> Plexe glorifies that prompt into a bigger prompt with more specific instructions (augmented by schema definitions, intent, and whatnot)
> it plugs that into the provided model/LLM
> `.predict()` gives me the output (which was heavily guardrailed by the glorified prompt in step 2)
Great question, and yes, it's quite different: Plexe generates code for a pipeline that processes your dataset (analysis, feature engineering, etc.) and trains a custom ML model for your use case. When you call `.predict()`, it's that trained custom model that produces the response, not an LLM. Plexe also hosts the model for you and handles the MLOps side: retraining the model on new data, evaluating its performance, and so on.

When you have a lot of data specific to your business, custom specialised models are generally more effective, faster, and cheaper than running every prediction through an LLM.
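To make the distinction concrete, here's a minimal sketch of the two flows the question contrasts. This is *not* Plexe's actual API: scikit-learn stands in for the kind of trained pipeline Plexe generates, and `call_llm` is a hypothetical stub representing a per-prediction LLM call.

```python
# Hedged illustration, not Plexe's API. scikit-learn stands in for the
# kind of generated pipeline described above; all names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; never invoked in this sketch.
    raise NotImplementedError("hypothetical LLM call")


# Flow the commenter describes: every prediction routes a "glorified
# prompt" through an LLM, paying LLM latency and cost each time.
def llm_wrapper_predict(prompt: str, row: dict) -> str:
    augmented = f"{prompt}\nSchema: {list(row)}\nInput: {row}\nAnswer:"
    return call_llm(augmented)


# Flow the answer describes: a pipeline is trained ONCE on your data;
# .predict() then runs the trained model, with no LLM in the loop.
X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),              # feature engineering step
    ("model", GradientBoostingClassifier()),  # custom model for this data
])
pipeline.fit(X_train, y_train)                # one-time training cost

print(pipeline.predict(X_test[:5]))           # fast, cheap, LLM-free
print(f"held-out accuracy: {pipeline.score(X_test, y_test):.3f}")
```

The key difference: `fit()` runs once up front, and every subsequent `.predict()` is a local call to the trained model, with no prompt construction and no LLM involved.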