• 0 Posts
  • 22 Comments
Joined 2 years ago
Cake day: July 10th, 2023



  • The biggest issue with accepting free housing and other perks is the unspoken cost: what are the expectations in return?

    I’ve spent time in Taiwan and mainland China, as well as many other Asian countries. China has its citizenry riled up in rampant nationalism thanks to the isolation of the people and propaganda. The propaganda that Taiwan (and Hong Kong) are part of China is deeply rooted in state-sponsored group-think and is not going away any time soon. I will say that the people I met, while angry when speaking about Taiwan, did not seem to wish the people there any ill will; rather, they seemed upset about the very idea of Taiwan being separate.

    That’s all to say, the political situation is complex. However, the real question here is twofold: 1) is it against your chosen moral framework to capitulate and live in China, and 2) if it is, what are your morals worth to you? What specific monetary amount would get you to renounce your views?

    Parts of China are beautiful, the culture is lovely (especially in rural areas), and living there could genuinely be nice. However, your country is currently presenting the world’s largest bullseye, and while your presence won’t swing the final result, if you feel you have a moral responsibility to stay and speak up, then do so!








  • 0x01@lemmy.ml to Technology@lemmy.world: Why LLMs can’t really build software
    28 days ago

    I use it extensively daily.

    It cannot step through code right now, so true debugging is not something you use it for. Most of the time the LLM will take the junior-engineer approach of “guess and check” unless you explicitly give it better guidance.

    My process is generally to start with unit tests and type definitions, then a large multi-page prompt for every segment of the app the LLM will be tasked with. Then I’ll make a snapshot of the code, give the tool access to the markdown prompt, and validate its work. When there are failures and the project has extensive unit tests, it generally follows the same pattern of “I see that this failure should be added to the unit tests”, which it does, and then it re-executes them during iterative development.

    If tests are not available, or if something is not directly accessible to the tool, it will generally rely on logs, either generated directly or provided by the user.

    My role these days is to provide long, well-thought-out prompts, verify the integrity of the code after every commit, and generally treat the LLM as a reckless junior dev. Sometimes junior devs can surprise you: yesterday I was very surprised by a one-shot result. I asked for a mobile RN (React Native) app to take my rambling voice recordings and summarize them into prompts; it was immediately, remarkably successful, and now I’ve been walking around mic’d up to generate prompts.
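The test-driven loop described above can be sketched roughly like this. This is a minimal illustration, not my actual tooling: `ask_llm_to_fix` is a hypothetical placeholder for whatever agent or API you use, and the parsing assumes pytest-style summary output.

```python
import subprocess

def summarize_failures(pytest_output: str) -> list[str]:
    """Pull the FAILED lines out of pytest's summary so they can be
    pasted back into the next prompt for the LLM."""
    return [
        line.strip()
        for line in pytest_output.splitlines()
        if line.strip().startswith("FAILED")
    ]

def run_tests() -> str:
    # Run the suite and capture its output (assumes pytest is installed
    # and the tool has shell access to the project).
    result = subprocess.run(
        ["pytest", "--tb=short"], capture_output=True, text=True
    )
    return result.stdout

# The iterative loop: run tests, hand failures back to the model, repeat.
# while failures := summarize_failures(run_tests()):
#     ask_llm_to_fix(failures)  # hypothetical; your agent/tool goes here
```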


  • Processing (CPU) doesn’t really matter as much as the GPU, and generally the constraint is GPU memory on consumer-grade machines. Processing on Nvidia chips has become the standard, which is a huge part of why Nvidia has become the single most valuable company on the planet. You can use a CPU, but you’ll find the performance almost unbearably slow.
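A rough back-of-the-envelope for that GPU-memory constraint: weights take roughly params × bits ÷ 8 bytes, plus some headroom for the KV cache and activations. The 20% overhead factor here is an assumption, not a precise figure.

```python
def model_vram_gb(n_params_billion: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    # Weights: params * (bits / 8) bytes; the overhead factor (~20%,
    # an assumption) covers KV cache and activations.
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model quantized to 4 bits needs roughly 4 GB or so of VRAM,
# which is why quantized models are the norm on consumer cards.
```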

    Ollama is the easiest option, but you can also use PyTorch (ExecuTorch), vLLM, etc.
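To show how little Ollama asks of you: once `ollama serve` is running and a model is pulled (e.g. `ollama pull llama3.2`), generation is one POST to its local REST API. The model name below is just an example.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> dict:
    # /api/generate takes a model name and prompt; stream=False returns
    # a single JSON object instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # requires a running `ollama serve`
        return json.loads(resp.read())["response"]

# Example (needs Ollama running with the model pulled):
# print(generate("llama3.2", "Explain GPU memory constraints in one sentence."))
```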

    You can download your model through Hugging Face, or sometimes directly from the lab’s website.
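For the Hugging Face route, single-file models (like GGUF quants) can be fetched straight from the hub’s “resolve” URL pattern; this tiny helper just builds that URL. The repo and file names in the comment are illustrative, so check the actual model card for real ones.

```python
def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    # Hugging Face serves raw repository files at this "resolve" path,
    # which is what download tools hit under the hood.
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Example (names illustrative; take them from the model card you want):
# url = hf_file_url("TheBloke/Llama-2-7B-GGUF", "llama-2-7b.Q4_K_M.gguf")
```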

    It’s worth learning the technical side, but Ollama genuinely does an excellent job and takes a ton off your plate.