
Techstructive Weekly #6

Week #6

This week was all about diving deep into the details of my projects. After weeks of building, I spent this week debugging and testing everything, pushing things to the next level. My goal was to have a working demo by the end of the week, which I started on Wednesday—and I’m happy to report that I almost hit the mark. There’s always room for improvement, but the testing phase went as planned, and the hypothesis is holding up.

I’m pumped for next week at work, where I’ll be wrapping up some key integrations. Once again, LLMs are proving to be game-changers.

Quote of the week

“The secret of life is to fall seven times and to get up eight times.”

— Paulo Coelho

I failed at the start of my week: I had poor results from the experimentation I did. That was me limiting myself to the things I knew and getting very rigid in my approach. But a single change of thought, along with guidance from my manager, got me straight from nowhere to somewhere I wanted to be, where he knew I should be. This is the result of showing up the eighth time after failing seven times. Keep showing up: even if you fail, fail harder, or hit rock bottom, you will learn or discover something new for sure, and that's growth, that's winning, much better than becoming invincible.

Read

Wrote

  • Safely using Maps in Golang: Differences in declaration and initialization
    I said in the previous newsletter that I would write this article, and I did it over the previous weekend. The title I had in mind versus what is actually there is quite different: I had a very specific problem with url.Values panicking when written to without being initialized, but I found out that the underlying type is a map in Golang, so this learning became applicable to maps in general. This is the reason I like to write blog posts: I get a deeper understanding of the problems and solutions than what I thought I knew.

Watched

  • Retrieval Augmented Generation:

    For anyone new to LLMs and RAG, this video might be helpful to understand what exactly the RAG method is and how it is used to prompt an LLM. Nicely explained.

  • AWS CEO says the end of programmers is near:

    This is such a lame take. How could a 40-year-old industry get wiped out in such a short amount of time? It will take at least decades to wipe out the gigantic pile of existing code.

  • Boot.dev Season 1 Final Episode: Origins of Boot.dev

    This is an incredible journey of the gamified platform for backend developers. It serves as an inspiration that a side project can become a full-time hustle.

Learnt

  • url.Values in golang is a map under the hood; that’s so obvious, but I had never thought about it. Now it is clear that it is a map of strings to lists of strings (map[string][]string).

  • Python joblib to parallelize functions: This is one of the libraries and ideas that I learned in my daily work to get things done. I used it to call the same function with different arguments in parallel to save time. So, if a single function call takes 10 seconds and I have 3 calls to make, sequentially it would take 30 seconds, but with parallel calls they can be done in just over 10 seconds. A minimal sketch is below.
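
    A minimal sketch of the idea, using joblib's Parallel and delayed with a made-up slow function that just sleeps to stand in for the real work:

    from time import sleep, time

    from joblib import Parallel, delayed

    def slow_call(name):
        # stand-in for the real function (an API request, heavy computation, etc.)
        sleep(10)
        return f"result for {name}"

    start = time()
    # run the three calls on separate workers instead of one after another
    results = Parallel(n_jobs=3)(
        delayed(slow_call)(name) for name in ["alpha", "beta", "gamma"]
    )
    print(results)          # ['result for alpha', 'result for beta', 'result for gamma']
    print(time() - start)   # just over 10 seconds instead of ~30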

  • Using set.update to add all the keys of a dictionary to a set: Let’s say I have a list of dictionaries of some sort, and I want to keep track of all the unique keys across those dictionaries. The dirty, long way would be this:

    tables = [
        {"id": 1, "name": "Alice", "age": 25, "email": "alice@example.com"},
        {"id": 2, "name": "Bob", "city": "New York", "phone": "555-1234"},
        {"id": 3, "country": "USA", "zip": "12345", "email": "charlie@example.com"},
        {"id": 4, "name": "Dana", "state": "California", "city": "San Francisco"}
    ]

    unique_keys = set()
    for table in tables:
        for key in table:
            unique_keys.add(key)

    print(unique_keys)
    # {'id', 'name', 'age', 'email', 'city', 'phone', 'country', 'zip', 'state'}
    

    A cleaner way would be this:

    unique_keys = set()
    for table in tables:
        unique_keys.update(table.keys())
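
    The same thing also fits in a one-line set comprehension, if you prefer that style (just an alternative, same result):

    unique_keys = {key for table in tables for key in table}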
    
  • OpenAI function calling did not work as well for me as normal (non-function) prompts. By a function-call prompt I mean one where the structure of the response is provided as an object (a schema) and the LLM has to respond adhering to that structure. That sounds good, but I am not sure why it goofs up the actual text provided to it. With a normal (non-function) prompt the response is much better, as we have more control over the things that can be added: validations, instructions, and a specific structure. I am surprised that normal prompts can also give responses in a structured way, and consistently too. A rough sketch of the two styles is below.
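
    A rough sketch of the two styles, assuming the openai Python SDK (v1.x); the model name, schema, and prompts are placeholders made up for illustration:

    from openai import OpenAI

    client = OpenAI()

    # Style 1: function calling, where the expected structure is given as a JSON schema "tool"
    tool = {
        "type": "function",
        "function": {
            "name": "summarize",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "summary": {"type": "string"},
                },
                "required": ["title", "summary"],
            },
        },
    }
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarize this text: ..."}],
        tools=[tool],
        tool_choice={"type": "function", "function": {"name": "summarize"}},
    )
    structured = resp.choices[0].message.tool_calls[0].function.arguments  # JSON string

    # Style 2: a normal prompt that simply asks for the same JSON shape
    prompt = (
        "Summarize this text: ...\n"
        'Respond only with JSON like {"title": "...", "summary": "..."}.'
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    structured = resp.choices[0].message.content  # the JSON text, to be validated yourself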

For more news, follow the Hackernewsletter, and for daily developer articles, join daily.dev

That’s it for this week. I hope you did well, and have a happy week and weekend ahead!

Thank you for reading, let’s catch up in the next week.

Happy Coding :)