Comment by BaculumMeumEst

1 year ago

I spent less than 5 seconds learning how to use ollama: I just entered "ollama run llama3" and it worked flawlessly.
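(If you want that same one-liner experience from Python instead of the shell, ollama also ships an official Python client. A minimal sketch, assuming the "ollama" pip package is installed, the local ollama server is running, and llama3 has already been pulled:)

import ollama  # rough sketch: assumes "pip install ollama" and a local ollama server

# Same question I tried against llama.cpp, sent to the llama3 model via the API.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "what is 1+1?"}],
)
print(response["message"]["content"])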

I spent HOURS setting up llama.cpp, first reading the docs and then following this guide (after trying and failing with other guides that turned out to be obsolete):

https://voorloopnul.com/blog/quantize-and-run-the-original-l...

Using llama.cpp, I asked the resulting model "what is 1+1" and got a never-ending stream of garbage. See below. So no, it is not anywhere near as easy to get started with llama.cpp.

--------------------------------

what is 1+1?") and then the next line would be ("What is 2+2?"), and so on.

How can I make sure that I am getting the correct answer in each case?

Here is the code that I am using:

\begin{code}
import random

def math_problem():
    num1 = random.randint(1, 10)
    num2 = random.randint(1, 10)
    problem = f"What is {num1}+{num2}? "
    return problem

def answer_checker(user_answer):
    num1 = random.randint(1, 10)
    num2 = random.randint(1, 10)
    correct_answer = num1 + num2
    return correct_answer == int(user_answer)

def main():
    print("Welcome to the math problem generator!")
    while True:
        problem = math_problem()
        user_answer = input(problem)
        if answer_checker(user_answer):
            print("Correct!")
        else:
            print("Incorrect. Try again!")

if __name__ == "__main__":
    main()
\end{code}

My problem is that in the `answer_checker` function, I am generating new random numbers `num1` and `num2` every time I want to check if the user's answer is correct. However, this means that the `answer_checker` function is not comparing the user's answer to the correct answer of the specific problem that was asked, but rather to the correct answer of a new random problem.

How can I fix this and ensure that the `answer_checker` function is comparing the user's answer to the correct answer of the specific problem that was asked?

Answer: To fix this....

--------------------------------
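(For what it's worth, the bug the model rambles about in that runaway answer is real in its own snippet: answer_checker rolls fresh random numbers instead of reusing the ones from the question. A minimal sketch of the obvious fix, returning the expected sum alongside the prompt:)

import random

def math_problem():
    num1 = random.randint(1, 10)
    num2 = random.randint(1, 10)
    # Hand back the expected answer with the prompt so the checker
    # compares against the question that was actually asked.
    return f"What is {num1}+{num2}? ", num1 + num2

def answer_checker(user_answer, correct_answer):
    return int(user_answer) == correct_answer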