PROBLEM:
If time travelers from the future were to visit you, it would be difficult for them to quickly prove their authenticity.
SOLUTION:
Temporal passwords. At the start of each year, you devise a new password. You commit the password to memory, but you never write it down or divulge it to anyone until Dec. 31, when you submit it to the Temporal Password Registry, which publishes it and promotes its dissemination.
RESULT:
Prior to their visit, time travelers from the future can look up your temporal password in the registry for the year in which they plan to visit you. Their ability to communicate a password that you have not yet shared with anyone provides evidence that they are actually from the future.
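For the visit itself, verification is trivial; a minimal sketch in Python (names hypothetical), assuming you still hold this year's password in memory only:

    import hmac

    def visitor_is_credible(claimed_password: str, my_secret_this_year: str) -> bool:
        # constant-time comparison, out of habit
        return hmac.compare_digest(claimed_password.encode(), my_secret_this_year.encode())

The hard part of the scheme is the registry and your discipline, not the check.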
PROMO:
Schools are functionally no different from part-time prisons. You must attend daily under penalty of law.
Many teachers are plain awful.
They rely on students being additionally taught by their parents. That forces parents to go to school with their children, which perpetuates the vicious cycle. The teacher recalibrates the class to the students who either understood everything the first time or had supplemental education, and the rest languish.
Schools assign homework that is not easy to do when the student hasn’t fully grasped the concept. That burns time they could use to get better.
So... I am implementing Math Common Core in software. The first part is an automatic homework solver for math. Once we've solved the student's homework, we can teach them how to do it with generated problems. Crucially, there will be multiple perspectives and an ontology of topics so the student can backtrack to where they got lost in class weeks ago.
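A minimal sketch of the "solve the homework first" step, assuming typical pre-algebra input already parsed into symbolic form (the real system would also need to read the problem from text or a photo):

    from sympy import Eq, solve, symbols

    x = symbols("x")
    problem = Eq(3 * x + 5, 20)   # "3x + 5 = 20"
    print(solve(problem, x))      # [5]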
After we are good with math, we’ll do the same with English. It will probably not go too deep, but it will let students obtain the missing foundation of knowledge.
My theory is that most reasonable people stay off social media, so places like Twitter end up filled with unreasonable narcissists. At the same time, discussing politics on a semi-anonymous forum like Reddit is pointless; who cares if someone on the Internet is wrong? But maybe there's a better way of communicating, something new, that lets you talk with people you actually know.
Serious thought involves more than just collections and associations: mastery requires repetition, creativity requires serendipitous discovery, and productive output requires flow states. It’s also a matter of acknowledging the fact that “units of knowledge” do not exist on their own: all knowledge is embedded in context (or “deeply intertwingled”, in the words of Ted Nelson), and without context, metaphor, and nuance, we cannot form meaningful connections. By baking these attributes into the medium itself, it’s possible to build an information space that’s simple to explore, can surface information when you need it, can augment the mind’s natural ability to form connections, and can get out of the way the rest of the time.
Replacing HTML/JS/CSS with a language called ALFI. It is stupidly simple in its design but still very powerful. Similarly to HTML you use it to create widgets, place them, and define their behavior. It is humanly readable like HTML but line-based instead of markup-based. Instead of nesting it uses references. This allows it to be streamed.
A big difference is that the language itself doesn't allow styling (like CSS), the downside being you get less flexibility but the upside being it will render correctly on any display with any resolution.
For this I have also written a new type of web browser called NAVI which takes ALFI code and produces (somewhat) beautiful widgets and renders them using OpenGL.
Source for both ALFI and NAVI: https://github.com/jezze/alfi
My own ALFI website: http://www.blunder.se/
You need NAVI to actually browse blunder.se properly; otherwise you will just see ALFI code. Also, this is still very early, so not all features are done yet.
The first scientific experiment was conducted by 5th century BC Pythagoreans. They wanted to show that the basis for musical consonance was math. From that, they inferred that harmony in math accounted for the harmony of the cosmos. This integration of math+physics was very forward thinking.
But, if we fast forward to the present, we still don't have a complete scientific explanation for the basis of consonance and dissonance. Really! To make my own contribution, I've been running psychophysical experiments to investigate why consonant chords that are mathematically slightly dissonant actually sound much better than chords with perfect mathematical consonance. I've been gathering data with sounds but also with haptic vibrations and with visual flicker frequencies. This multisensory approach is fun because it produces visible rhythmic entrainment in the brain, as seen with EEG. My goal is to contribute to a general theory of neural resonance and harmony in human experience.
Why does this matter? Happiness is great, but I'd argue that what we really want is personal and global harmony. Note that harmony isn't sameness, it is unity in variety -- the resolution of conflict and dissonance into an integrated wholeness. We want inner harmony with our selves, harmony in our relationships with others, harmony in society, harmony with technology and harmony with nature. Happiness is individualistic but harmony involves the pleasure of virtue. I hypothesize that harmony can help set a better objective function for the future of humanity.
Harmony was also the objective function for the first deep learning neural network, Paul Smolensky's Harmonium.
Finally, harmony is also a central theme in classical philosophy. The concept had a massive influence in the Italian Renaissance and in the English Scientific Revolution.
I recently put together a reader for understanding Plato's views on Harmony. Comments are welcome:
https://docs.google.com/document/d/1lqXpXgWI5YMBCz1O0gCmrEwz...
There's a sample at https://random-character-generator.com/
There's an unreleased version, which focuses more on how to portray characters, rather than just what they are. For example, instead of saying "energetic", they'll be pacing about a bit.
I might just pivot it into a story/plot tracker for writers, and use it to fill in the blanks rather than generating full characters from scratch, with a community that can add in their own templates and tropes. An author can decide that they have a character who is stoic, cynical, and sarcastic, and the tool will generate a background story, how to portray the character, and what conflicts they get into with other characters.
I don't have a computer science background, so encoding and compression are new to me, but I'm a good hacker and I can quickly get things like OpenPose working. I'm trying to complete this in the next 3 months. Wish me luck.
So I'm writing curricula that use computer programs as the primary teaching tool. One is for computer science, where the idea is that anyone who can read some python can pick up all the important ideas from a formal CS education without sitting through a year or more of preliminaries. Over time I'm planning to add smaller sections on more advanced topics.
The other curriculum is theoretical physics. There's already a good book that does classical mechanics [1] in Scheme. I've hired some postdocs to learn Scheme and code lessons in general relativity, statistical mechanics and so on. I do the lessons, solve the problems, and then we talk about what worked and what didn't. I work on this about ten hours a week. After a couple of years I should have knowledge roughly equivalent to an ABD physics grad student, plus teaching material that can take anyone else to the same level from modest beginnings.
I'm looking for collaborators on this project so don't be a stranger. Twitter/email is in my profile
[1] https://groups.csail.mit.edu/mac/users/gjs/6946/sicm-html/bo...
I'm building a "catalog" of architectures that you could use to create a complete cloud architecture on your AWS, GCP or Azure account in less than one minute.
So, for example, you could create a docker-based architecture with CI/CD, auto-scaling, zero downtime deployment, SSL, load-balancing, high availability and MongoDB in less than one minute in your own AWS account.
It's like Terraform with the user-friendliness of Heroku.
It's very hard because every provider has different APIs and concepts, so you have to start from scratch for each.
I love working on it because cloud computing can have so much impact in organizations like biotech startups or NGOs.
A fountain code is an almost magical algorithm that can split a file of size n into a (practically) infinite stream of blocks of size b, such that collecting any n/b blocks from the stream (in practice, slightly more) can reconstruct the file.
Applied to p2p file sharing it effectively can eliminate rare pieces as well as the need to communicate which pieces people have. Related topic here is homomorphic hashing.
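A minimal sketch of the encoding side, using a random linear fountain over GF(2) (real designs like LT/Raptor codes use carefully chosen degree distributions so that decoding is fast):

    import random

    def fountain(data: bytes, block_size: int):
        # Split into fixed-size source blocks, zero-padding the last one.
        blocks = [data[i:i + block_size].ljust(block_size, b"\0")
                  for i in range(0, len(data), block_size)]
        while True:
            # Each output block is the XOR of a random subset of source blocks.
            subset = [i for i in range(len(blocks)) if random.random() < 0.5]
            out = bytearray(block_size)
            for i in subset:
                for j in range(block_size):
                    out[j] ^= blocks[i][j]
            yield subset, bytes(out)

A receiver collects outputs until the index sets span all source blocks over GF(2), then recovers the file by Gaussian elimination on the XORs.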
Unless I find something better before September my master thesis will be on this topic.
There's an EU directive instructing how citizens should be able to identify themselves online with eIDAS. In my country, you can use eIDAS to authenticate with basically any governmental agency portal, but you can't get any eIDAS-enabled auth method as a citizen. The current way of authenticating is via bank accounts or a paid extra mobile service that requires a non-prepaid mobile contract.
This is a relatively huge issue. First off, the Finnish government pays the banks for each auth any user does when they for example want to log into their medical records etc. It's a few million euros a year just for verifying the users.
There are also obvious issues with whom the banks serve: there have been cases of them not taking foreigners or people with bad credit as customers, making it impossible for those people to authenticate themselves.
The current EU directives also indirectly require that the banks provide a bank customer the possibility to authenticate without needing to have a banking account (which costs money), but to my knowledge this still isn't possible. I pay around 20 euros a month just for the luxury of having an account; not everyone can afford that on top of other bills.
Auth services are not accessible for impaired users.
It's also basically impossible to manage who has essentially the power of attorney and over which matters, for how long etc. Either you have to give them your login info (good luck resetting your SSN) or try to use the services over the phone and somehow convince the other side that you have permission to manage things for another person.
There's no way of verifying who is actually using your accounts online.
Basically, my idea is combining biometrics, PGP and having the government running the identity management themselves. This would have added benefits of basically enabling hashed throwaway addresses and info for use online while providing a free and accessible way of authenticating strongly online.
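The "hashed throwaway addresses" part could be as simple as deriving a per-service pseudonym from the verified identity; a sketch, with all names hypothetical:

    import hashlib, hmac

    def service_pseudonym(national_id: str, service: str, issuer_key: bytes) -> str:
        # A service can recognise a returning, government-verified user
        # without ever learning the real identifier behind the pseudonym.
        msg = f"{national_id}:{service}".encode()
        return hmac.new(issuer_key, msg, hashlib.sha256).hexdigest()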
Specifically, I'm working with a local pro-refugee organization in a densely immigrant-populated region in Spain. There's a complex chain of steps that you have to go through in order to acquire citizenship. Only people with access to good lawyers are able to deal with all the bureaucracy of the process, not to mention other problems (missing obscure expiry dates that reset your process, language-related problems, local government workers not actually knowing or willfully ignoring migrants' rights...).
There's a good network of volunteer lawyers working on this issue, but it's not scalable. I'm working on a platform that would allow migrants to resolve their own situation by crowdsourcing the knowledge of lawyers on a case-by-case basis and offering a simple interface in their language to track open processes & discover the ones they need to go through and how.
As an abstraction of this, I've been thinking about how we could improve citizen/government communication. A small use case / example for this could be refugee camps. My previous experience here is that they are small, disconnected communities with a top-down type of organisation towards the camp organisers. It shouldn't be hard to provide real-time tools for connecting both, potentially leading to things like asking for their needs, managing their legal situation, or even allowing for voting & self-governing.
Step one is survival: basics of hunting, first aid and farming sort of stuff. That volume would end around homesteading and self-sufficient living.
Volume two would be establishing society in larger groups than a family unit. Things like job specialization (N roles for N people instead of 1/N of each role for each person), establishing trade (currency, weights & measures, supply chain), government (mostly what not to do, what to protect, how to adjudicate disagreement), public works ("roads are a good idea") and their ilk. Also medicine beyond first aid and basic care.
Volume three would be advanced STEM topics, getting from a functioning society to... more. Not even the sky's the limit. It should include blueprints for things we take for granted like refrigeration, telecommunication and birth control. It will include all the basics of physics, chemistry and biology required for smart people to fill in the gaps and launch a human to the moon and back.
I want to super-nerd-out about this, and publish it on Tyvek or something exotic so it'll last through decades of wear and tear (and water-logging and more), and include a ruler on the spine and its own weight documented for reference.
Machine learning: what happens when we replace Euclidean metrics with p-adic ones? Distance is fundamental to so many algorithms (least squares regression; nearest neighbours; anything involving gradient descent). How do those algorithms behave over completely foreign metric spaces?
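For intuition, here's a tiny sketch of what "distance" becomes in the p-adic world:

    def vp(n: int, p: int):
        # p-adic valuation: the largest k such that p**k divides n.
        if n == 0:
            return float("inf")
        n, k = abs(n), 0
        while n % p == 0:
            n //= p
            k += 1
        return k

    def padic_dist(a: int, b: int, p: int) -> float:
        # |a - b|_p = p ** -vp(a - b): numbers are close when their
        # difference is divisible by a high power of p.
        v = vp(a - b, p)
        return 0.0 if v == float("inf") else p ** -v

    print(padic_dist(0, 1024, 2))  # 2**-10: very close 2-adically
    print(padic_dist(0, 1, 2))     # 1.0: far apart

Nearest-neighbour sets under this metric look nothing like Euclidean balls, which is exactly what makes the question interesting.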
SOLUTION: The space doesn't lack for good research and policy recommendations, but it has historically lacked, on both the right and left, non-screedy, nonpartisan voices that can be trusted when policymakers look for solutions. We're attempting to fill that space.
WHAT WE WANT: Ultimately? We want every major American city to work better for the people who live and work there. It'll look different from community to community and our job isn't about applying a cookie-cutter approach. Instead, we want to get a wider range of ready-to-implement tools in front of the decision makers, and educate engaged citizens that solutions exist.
Today's web is a collection of applications that largely provide a frontend for browsing data. The applications and data they contain are silos: there is no easy way to separate the data from functions and compute across datasets. Every application must (re)invent its own UI for querying and displaying data.
But if the web is actually a collection of datasets, why don't we have a web browser for consuming and interacting with arbitrary structured datasets?
We can model most popular sites (HN, Instagram, Twitter, Amazon etc) as a collection of hyperlinked JSON records. Let users adjust how these records are displayed. Provide a universal way to query and navigate any dataset and invoke associated functions (eg the upvote function for an HN post).
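As a sketch (shape entirely hypothetical), an HN post might be exposed as a record like this, with links and functions as first-class fields:

    hn_post = {
        "type": "story",
        "title": "Ask HN: What weird or hard problems are you working on?",
        "points": 128,
        "comments": {"href": "hn://item/123/comments"},             # link to another dataset
        "actions": {"upvote": {"invoke": "hn://item/123/upvote"}},  # associated function
    }

The browser, not the site, would then decide how to render and query records of this kind.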
Full separation of data and functions instead of application silos is necessary to achieve general AI compute in the future.
Example: can you email Mark a summary of the top 5 most popular HN articles 3 days before our meeting?
The foundation is flowcharts, with support for individual layers distinguishing levels of abstraction, and scenarios for exploring use-cases. From there:
- Live data. We look at metrics on dashboards but it doesn't put into perspective how they relate to each other. Imagine seeing on your flowchart of servers, that one worker has an anomalous CPU reading, and you can click into that to see the individual readings of the running services on it. (rudimentary version: https://app.terrastruct.com/diagrams/1404897320)
- Automatic generation and sync of diagrams. Having access to sources like AWS account and version control to create and keep in sync diagrams of your infra, db schemas, UML classes, etc.
- Collaborative editing, seamless integrations with written documentation, linking directly to code where appropriate, version control, etc.
So much of software can be better understood visually. Still early on, if you're interested in learning more, https://terrastruct.com. And would love to chat (email in profile) with anyone with ideas.
I've been journaling my dreams for years and I'm working on an app that makes it easier to (visually) map them out & find patterns: https://oneironotes.com/
I like the idea of accessing other (inner) dimensions during sleep, like an explorer (an "oneironaut"). The problems to overcome are related to capturing and recollecting experiences that only take place in the mind. You asked about the weird stuff...
Which is why I started making Cleave: an application that lets users persist OS state as a "context", saving and loading open applications, their windows, tabs, open files/documents and so on.
Started because of frequent multitasking-heavy work with limited resources.
Made it because I wanted to switch between studying, working, reading, looking for an apartment, etc. without manually managing all states or consuming all resources.
Open Beta (macOS) as soon as I finish license verification and delta updates, but I keep getting sidetracked...
Which is to say: what is the relation between a sub-Peano logical system that can prove itself consistent and is complete, thus bypassing the 2nd Incompleteness Theorem, and self-referential decision problems posed in the programming language that corresponds to that logical system?
My suspicion is that such a language could provide very strong guarantees on the auditability of self-modifying code.
[0] https://en.m.wikipedia.org/wiki/Self-verifying_theories
[1] https://www.semanticscholar.org/paper/Breaking-through-the-n...
It is known that Q takes the form Q=a(x,y)b(y,z)c(z,x) for some functions a,b,c to be determined by solving the system of equations:
P(x,y) = sum_z Q(x,y,z)
P(y,z) = sum_x Q(x,y,z)
P(z,x) = sum_y Q(x,y,z)
It's not clear there exists a general closed-form solution. Iterative algorithms are known. This type of problem comes up in a number of interesting contexts. For instance, testing for non-trivial multi-variable interactions in dynamical systems such as neural networks or spin networks, performing joins on probabilistic databases, constructing reduced models of probability distributions, and in some cooperative game theory problems.
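One of the known iterative algorithms is iterative proportional fitting: start from the uniform distribution and cyclically rescale Q to match each pairwise marginal. A minimal numpy sketch, assuming strictly positive marginals that are consistent with some joint distribution:

    import numpy as np

    def fit_pairwise(P_xy, P_yz, P_zx, iters=500):
        nx, ny = P_xy.shape
        nz = P_yz.shape[1]
        Q = np.full((nx, ny, nz), 1.0 / (nx * ny * nz))
        for _ in range(iters):
            Q *= (P_xy / Q.sum(axis=2))[:, :, None]    # match P(x,y)
            Q *= (P_yz / Q.sum(axis=0))[None, :, :]    # match P(y,z)
            Q *= (P_zx.T / Q.sum(axis=1))[:, None, :]  # match P(z,x)
        return Q

Each update multiplies Q by a function of only two variables, so the result keeps the product form a(x,y)b(y,z)c(z,x).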
Examples: https://www.princeton.edu/~wbialek/our_papers/schneidman+al_...
http://vldb.org/conf/1987/P071.PDF
https://doi.org/10.6028/jres.072b.019
https://www.mdpi.com/1099-4300/16/4/2161
My current side project is https://feedsub.com. Right now it's not great. I started by building a simple tool for getting regular updates from RSS feeds, but longer-term I want to turn this into a system which can absorb all the data streams you're interested in (news, stocks, weather, social, communities) and give you dials (filters, curation, signals, etc.) to surface a healthy amount wherever you want (SMS, email, web, RSS, chatbots, etc.).
The crux of the problem is endless scrolling feeds we're sucked into 24/7, which is why I based my MVP on email.
My current solution is trivial on a technical level. Honestly, my biggest problem is thinking about the problem on a non-technical level, balancing this with working life and branding, since my software and vision are very far apart right now.
(TBH, this isn't nearly as hard a problem as some of the others here - but I enjoy the ideas/feedback I get from communities like HN)
The regular iterator protocol, as it exists in Rust today, makes it hard to do things like JOINs, GROUP BYs and other fancy stuff (because you need to decompose the computation into a partial state machine; this is hard even for a developer, and impossible to ask of a regular data user). Also, you need to duplicate all of that for async (with streams) and other abstractions...
I'm now trying to understand transducers (https://www.reddit.com/r/rust/comments/gqiyej/potentials_adv...) and stumble upon effects:
http://mikeinnes.github.io/2020/06/12/transducers.html
which looks to me like the clean approach I'd want to use.
But what looks easy in Python/F#/etc. is HARD to do in Rust. So I'm in a kind of limbo :)
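For what it's worth, here is the concept in a few lines of Python (Clojure-style transducers): the pipeline is defined independently of the source, which is what makes the same logic reusable across sync and async iteration:

    from functools import reduce

    def mapping(f):
        # transform a reducing function so it maps inputs first
        return lambda step: lambda acc, x: step(acc, f(x))

    def filtering(pred):
        return lambda step: lambda acc, x: step(acc, x) if pred(x) else acc

    def compose(*xforms):
        return reduce(lambda f, g: lambda step: f(g(step)), xforms)

    xf = compose(filtering(lambda x: x % 2 == 0), mapping(lambda x: x * 10))
    append = xf(lambda acc, x: acc + [x])
    print(reduce(append, range(10), []))  # [0, 20, 40, 60, 80]

Expressing this shape in Rust's type system, generic over whatever drives the reduction, is the hard part.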
Problem: Public schools in India don't do justice to students. Private schools in India charge a bomb, but most of the money ends up in the hands of the "owners" and not enough reaches teachers (for reference, an average primary school teacher earns less than what an Uber driver earns).
The solution: A network of "not-for-profit" schools where the fee structure is reasonable (can't be free), but the profits are shared amongst the people who make the schools run. Think "community banks" but for schools. I can't solve the problem for everyone but hope to set a good example by attracting the cream of teachers. It's time the teachers got their due.
Best is a mix of
- the ones I enjoy most
- but not one that I just visited last weekend
- where there has been recent snowfall / is predicted to snow
- that I may have a season pass for
- where costs of highway tolls, petrol, hotels can be optimised
- driving time
- whether I’m going alone or with friends
- don’t have anything important at work on Monday
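At its core it's a weighted-scoring problem; a toy sketch of the mix above, with fields and weights entirely made up:

    def score(resort, weights):
        return sum(w * resort[k] for k, w in weights.items())

    resorts = [
        {"name": "A", "fun": 9, "fresh_snow": 1, "visited_recently": 1, "drive_hours": 2},
        {"name": "B", "fun": 7, "fresh_snow": 1, "visited_recently": 0, "drive_hours": 4},
    ]
    weights = {"fun": 2, "fresh_snow": 5, "visited_recently": -4, "drive_hours": -1}
    print(max(resorts, key=lambda r: score(r, weights))["name"])  # "A"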
Eventually, I’d like to turn it into a kind of friend finder / social network thingy but for snowboarding.
More specifically, whenever you give a designer a design spec, it is always worth asking, how good is the best possible design for this spec? And, of course, can the designer actually achieve it, or something close to it? This is the question here.
In this scenario, the design spec is the optimization problem (what you want to optimize), the designer then gets to choose how to best approach this problem. In this case, you want to give a number that states, independent of how this problem is solved, what is the best any designer (no matter how smart or sophisticated, how much computational power they have, etc) can hope to do. In many cases giving such a number is actually possible! (See below references.)
-----
[0] https://pubs.acs.org/doi/10.1021/acsphotonics.9b00154 (PDF: http://web.stanford.edu/~boyd/papers/pdf/comp_imposs_res.pdf)
[1] https://arxiv.org/abs/2002.00521
[2] https://arxiv.org/abs/2003.00374
The mapping between the two is monotonic, non-linear, and can be stationary. If the tempo map is allowed to contain ramps (accelerando and ritardando in music speak), there are implicitly exponential sections in the function that maps between them.
Using floating point arithmetic leads to errors that have immediate effects. A musical time that should be considered to be at sample N is instead considered to be at sample N-1 or N+1.
It's surprisingly hard to do it correctly.
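A minimal sketch of the constant-tempo case, doing everything in exact rational arithmetic and rounding exactly once at the end (the ramp case is harder, because accelerando/ritardando introduce exponential segments that leave the rationals):

    from fractions import Fraction

    SAMPLE_RATE = Fraction(48000)

    def beats_to_sample(beats: Fraction, bpm: Fraction) -> int:
        seconds = beats * 60 / bpm
        return round(seconds * SAMPLE_RATE)  # the only rounding step

    print(beats_to_sample(Fraction(1, 3), Fraction(120)))  # 8000, exactly

Any intermediate float in that chain is where the N-1/N+1 sample errors sneak in.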
Despite this tight weight budget, I intend to build something rather interesting, but it is causing me to spend a lot of time in Fusion designing the parts along with slicing and reslicing 3D printed parts to shave partial grams of components to save a bit of weight.
I am partly obsessed with and researching / writing about how you could make carbon-free heating cheaper than fossil-fuel-based heating. If you can make geothermal systems cheaper than a natural gas furnace, then homeowners would have the same economic incentives as drivers, for whom operating an EV is far cheaper and cleaner than an ICEV.
cryo file manager is an attempt to get rid of the hassle: https://cryonet.io
Maybe the real hard problem is getting people to pay attention to what is happening in their world and to forecast what the impact of their actions will be in the next few decades.
I used to run match.com's research division. I left to pursue this project.
Unfortunately, many countries don't have useful e-IDs and the ones that do are limited to that one country. I want to create a single digital identity which works for everyone, for all applications, across borders. The basic features are:
- App based with no special hardware necessary.
- Privacy friendly with the user always fully aware of what data they are revealing.
- Simple to integrate for developers. It's a standard SSO flow over OAuth/OIDC.
I'm currently calling it Pass: https://getpass.app. If anyone wants to have a chat about digital identities you can reach me at fabian (at) flapplabs.se
Idea: A web browser for self-automation that learns the semantics and sentiment of content on the web, whilst trying to respect references and articles' sources to correct the ground truth.
Solution: Still far away from it, but am implementing a peer to peer Web Browser that can share its information (or states) with trusted peers. Trying to implement a recordable, editable and repeatable GUI for everything, which is a lot harder than it sounds.
[1] https://github.com/cookiengineer/stealth
All kind of feedback appreciated!
Another similar idea is a very simple rest protocol so that you can save to server and then make it easy to self host it.
I like the idea of people building apps as web pages without needing to worry about the server and the user owning their data but having the convenience of a cloud like solution where you just visit the site, log in and work.
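A hypothetical minimal version of such a protocol: two routes, one JSON blob per document, trivially self-hostable (Flask used for brevity; the route names are illustrative, not a finished spec):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    DOCS = {}  # swap for a directory of files to get real persistence

    @app.put("/doc/<doc_id>")
    def save(doc_id):
        DOCS[doc_id] = request.get_json()
        return "", 204

    @app.get("/doc/<doc_id>")
    def load(doc_id):
        return jsonify(DOCS.get(doc_id))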
How do you store this information in a database, in a geographically meaningful way? How do you represent it on a map?
Working on building a self-hosted app that would allow you to save, organise and search your knowledge in one central place.
It would contain information like notes and bookmarks (it would download the links' contents), and in general provide a programmable, open-source interface to preserve the info you find useful and even sync with external APIs to save your online presence locally (think Reddit posts, HN links, etc...)
Like a vim-style editor that translates your spec into generated code while you watch, so you can fix any issues as you go. Think Yeoman on generics and steroids.
Unfortunately there seems to be a lot of infantilizing in and around the ADHD sphere. Either we're treated like helpless children or we are encouraged to lower our expectations/goals.
It's hard to explain unless you've been open about having ADHD, or have been part of the community. The zeitgeist revolves around accepting and maintaining a status quo. The problem I'm trying to solve is how to build a growth/thriving lifestyle despite an ADHD diagnosis.
I don't expect atomically precise manufacturing (at least in the form of molecular nanotechnology) to show up for at least a decade or two.
But the software to design things that can be built using nanotech can be written before the technology to synthesize them exists.
There is no such official documentation.
Python package: https://github.com/kotartemiy/pygooglenews
Blog post: https://codarium.substack.com/p/reverse-engineering-google-n...
I wrote a new programming language, Eek, because it was impossible for me to handle the complexity of doing all this with current languages (which lack built-in support for asynchronous database access and parsing). So far the first generation of the language is working, but as an interpreter written in TypeScript; with it I wrote an English language parser and a simple database. Now I am working on a better, LLVM-based implementation of Eek. I started this thing about 3 years ago, and it will take some more years before it will be even demoable...
The idea is to assign a random 8- or 16-byte number (DdosID) to each connection. The DdosID is unrelated to any other identifier, like the HTTP3 connection ID. The client puts the DdosID at the beginning of every UDP packet's data section (raw; no encryption or compression). Packets that have a valid DdosID are processed. Packets with an invalid DdosID are dropped. New connections use zero as the DdosID. Any client can use zero to re-establish a connection or if something goes wrong.
If attackers make random DdosIDs it will result in packets that are dropped before decryption. If a valid DdosID (or hundreds of valid DdosIDs) are used, the volume of packets with that DdosID will make the attack obvious, and the DdosID can be invalidated and the packets dropped.
Attackers will need to spam new connection attempts (DdosID zero) to deny service. New connection attempts could be dropped, or a small percentage allowed when the application can handle them. The rate of allowed new connection attempts could be adjusted very easily and quickly, but a high rate of bad packets would still deny new connections. Existing connections continue to work.
The challenges are mostly creating functional thresholds: What packet volume is too high? How long is a DdosID valid? Should a load balancer record the IP and port and force a new DdosID if they change?
I would also like to see a way that applications and load balancers can use a DdosID system without it being dependent on the specific application; DdosID should not depend on the protocol it's protecting.
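A minimal server-side sketch of the filtering loop (handler names hypothetical; the point is that invalid packets cost one set lookup, no decryption):

    import os, socket

    ID_LEN = 8                  # bytes; 16 for the larger variant
    ZERO_ID = bytes(ID_LEN)     # all-zero ID marks new connection attempts
    VALID_IDS = set()           # DdosIDs of established connections

    def handle_new_connection(addr, payload):
        did = os.urandom(ID_LEN)   # assign a fresh DdosID (rate-limit here)
        VALID_IDS.add(did)

    def process(addr, payload):
        pass                       # real decryption/handling goes here

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 4433))
    while True:
        data, addr = sock.recvfrom(65535)
        did, payload = data[:ID_LEN], data[ID_LEN:]
        if did == ZERO_ID:
            handle_new_connection(addr, payload)
        elif did in VALID_IDS:
            process(addr, payload)
        # anything else is dropped before any expensive work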
Half of India's farmers do not have access to institutional/formal credit. They end up borrowing from local loan sharks at extremely high interest rates. The vicious cycle continues as they do not have access to markets favoring their produce's selling price, and end up being exploited.
Most banks, despite their mandated percentage of loan for agriculture, are not too keen (and unable) to work closely with farmers.
## Solution
By leveraging technology and international connections, we help farmers by giving them access to credit at a favorable interest rate. We work with the international community, such as Japanese investors, to lend their capital on our platform, thus helping the underserved farmers. In turn, the Japanese investors get returns much higher than their national savings interest rates.
Banks too can benefit from our ability to deploy their cash.
## Why Us, Why Now
I have struggled to articulate the philosophy/motto on how to help the farmers, as they are the most exploited populace. I'm zeroing in on -- "BE KIND".
If you want to hear more, contact me and I will send you our Executive Summary and/or Pitch Deck. We are fund-raising.
What's more, my history ENDS around 1937; it's the BC part of computing that set the stage. The build-up to computing.
Episode 0 lays it all out (http://comphistpod.com/introducing-the-computer-history-podc...)
I'm doing it in podcast form because it's 2020. This is going to be multiple years and I'm totally fine with that.
I divide the history up into multiple facets each with separate timelines.
Currently I'm going all the way through "electric communications" from the electric spark up through relay networks talking about switching, encoding, error correcting, signalling, all the important developments along the way.
In 2021 I'll close that out and do the same for storage (music boxes, looms, etc.) and computation, starting with clocks and Pascal's mechanical calculator and going forward from there: mechanical registers, overflow, adders, etc...
Each one is going to take at least a year or so.
I'm already about 2 months into recording, about 6 months into the project. This week will be Du Fay, Bose, Desaguliers, Watson, and the electric wire. Next week will be Leyden jars. This is a long, slow project, and AFAIK it has never been done before.
I'm doing the odd episodes as a timeline and the even episodes as diversions and discussions to keep things entertaining and light.
This is the first time I'm talking about it publicly. It's at http://comphistpod.com
I’m looking for a good framework to simulate the physics and visualizations.
https://github.com/jaxcore/bumblebee
Although the first release is not officially out yet, the NodeJS code is working and you can install the development version of the app server and try out the hello world app locally.
The solution involves running Mozilla DeepSpeech inside an Electron desktop application with a websocket server and client API that NodeJS scripts can interact with, to receive speech recognition results, utilize "alexa" style hotword commands, and text-to-speech. The electron app handles all the heavy stuff, and you just use a simple API.
A web browser extension can also make use of this API to bring these capabilities to web sites, but that part isn't finished yet.
UX description language for forms that respects high level constraints. Compiles to desktop browser, phone browser, and Alexa layouts.
Solving the complexity of matrix-matrix multiplication by brute-forcing the lower bound with semigroup combinatorics.
DSL for linear logic.
Ending Iowa’s criminalization of “annoying” speech. (Iowa Code 708.7)
Exposing Polk County Iowa Sheriff Kevin Schneider torturing inmates with denial of basic medical care.
Exposing pure nepotism corruption between Iowa Attorney General Chief of Staff Eric Tabor and his sister Iowa Court of Appeals Judge Mary Tabor (mom of @ollie).
Exposing that the prosecutor on Tracy Richter’s murder trial had relations and ended up marrying the daughter of his star witness Mary Higgins - and that the blood spatter expert Englert is a known fraud who wrongfully convicted David Camm and Julie Ray Harper.
The technical aspect is mostly fun in my case :-)
After the Twitter account of DDoSecrets got shut down (due to BlueLeaks), this got me thinking: how would you leak / provide data while not being directly attributable? (At least like a retweet - not your tweet, just amplified.)
And how to add some resilience and protection to the distribution, since there were indicators that the torrent and download of leaked data was being attacked.
So far, I have come up with an encryption matryoshka: you distribute leaks without telling what's inside, and gradually enable a few people to look inside until they are public.
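A sketch of the layering itself, assuming the `cryptography` package (peeling happens by publishing keys outermost-first, one at a time):

    from cryptography.fernet import Fernet

    def wrap_layers(payload: bytes, depth: int):
        keys = []
        for _ in range(depth):
            key = Fernet.generate_key()
            payload = Fernet(key).encrypt(payload)  # add one layer
            keys.append(key)
        return payload, keys[::-1]  # keys ordered outermost-first

    blob, keys = wrap_layers(b"leaked documents", 3)
    # hand keys[0] to a trusted few; publish the remaining keys over time
    for key in keys:
        blob = Fernet(key).decrypt(blob)
    assert blob == b"leaked documents"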
All that's missing is a better document describing it and a command-line tool to help walk through the multi-level encryption ... so there is 90% still to do ¯\_(ツ)_/¯.
It’s not that simple.
Keep tables balanced, move players from high-numbered tables to the lowest-numbered tables, try to move players to similar positions, mark players as waiting if they drop in the small blind, and skip the button next hand.
If you repeatedly flip a coin to either add 2 or divide by 2, what distribution do you approach?
If you have a distribution, what random processes are identities for that distribution? Which are stable? Which will reproduce the distribution from any starting distribution?
It’s related to matrix kernels, but it’s hard to generalize to continuous numbers. I spent a while looking into it years ago and couldn’t find much.
My goal was to eventually take it a step farther and create simple stochastic functions for certain behaviors, such as bistable switches, memory registers, etc.
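The first question at least is easy to poke at empirically; a quick Monte Carlo sketch:

    import random
    from collections import Counter

    def simulate(steps=1_000_000, x=1.0):
        # iterate x -> x + 2 or x -> x / 2 with equal probability
        samples = []
        for _ in range(steps):
            x = x + 2 if random.random() < 0.5 else x / 2
            samples.append(x)
        return samples

    print(Counter(round(s) for s in simulate()).most_common(10))

The analytical part - characterizing the stationary distribution, and which processes are identities for a given one - is where I couldn't find much.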
I've programmed in Python for much longer than R, and really want to be able to move at the same speed when using Python for data analysis :o.
It's a weird problem though because the two languages have basically opposite approaches to DataFrames. pandas has a very fat DataFrame implementation, R an extremely minimal one. (Pros and cons to both approaches).
This is a hard problem. It may not even be perfectly solvable at any time in the future.
When you increase the size of any kind of raster image, you're creating new information from the old.
There have been some pretty good approaches out there, like this [1][2]. They use a GAN topology for some impressive results, but are incredibly memory-intensive and can take a very long time to run.
I've been working on something [0] for a good long while which is a less expensive approach. Instead of attempting to replicate everything 1:1, it intentionally allows some detail loss, whilst attempting to preserve everything important.
It's not ready for the public, and video still needs some significant improvements to remove some of the artifacts. But I've released one TV series upscaled with it [4], thus far.
But as it stands, at 2x and 2.5x scales, it does pretty well, with the average person preferring it to most other resizing methods. It doesn't reach the GAN approaches' quality, but you're looking at an average of 12 seconds for upscaling, versus 80 seconds for the GAN approach, for the same size upscaling and what people perceive as the same quality.
It already beats most of the traditional resizing algorithms pretty soundly. [3]
[0] https://git.sr.ht/~shakna/upscaler_analysis
[1] https://developer.ibm.com/technologies/artificial-intelligen...
[2] https://arxiv.org/pdf/1609.04802.pdf
[3] https://git.sr.ht/~shakna/upscaler_analysis/blob/master/crus...
[4] https://sixteenmm.org/blog/20200626-Gunsmith%20Hits%20HD
https://www.producthunt.com/posts/cozyroom
And figuring out how to make it easier to write and develop ideas through blogs:
These suits allow one to survive for weeks out in the deep desert, by catching and recycling all of the body's lost water. Making sure no perspiration can escape is doable. Filtering the sweat to produce clean, salt-free water should be possible as well - membranes for water desalination already exist today. Having all the required pumping action provided through walking and breathing is a mechanical problem that should be theoretically solvable. The big, unsolved problem I see is heat.
The book says that the suit's layers closest to the skin allow the sweat to evaporate and thus provide cooling to the body. But the water then has to condense again somewhere. From my (limited) understanding of the laws of thermodynamics, the amount of extra heat created through condensation should be exactly equal to the amount of cooling the evaporation provides, making the whole cycle a zero-sum affair. But for this cycle to work in the first place, the skin would have to be of higher temperature than the layer where the condensation occurs. If the desert heat is above body temperature, we'd need some sort of heat pump like in a fridge. Using changes in pressure and density (through a compressor), you could cool the suit's inside below body temperature while heating the outside above ambient temperature - which is necessary to actually give off heat to the outside.
Those compressors are heavy and power-hungry though, and you'd need additional high-pressure water lines through the suit - increasing the bulk of the whole thing considerably. Future technology might be more miniaturized and more energy-efficient, but still... Maybe piezoelectric/thermoelectric cooling (exploiting the Peltier effect) would be a better choice for a suit like this. Then you'd have a light inner layer that allows for air circulation - so that the sweat can evaporate on the skin, and condense again at the thermoelectrically cooled middle layer - where it's collected and pumped away to the membrane filters and catchpockets.
The outer layer of the suits would have to be made of flexible solar cells, in order to provide the electricity required for the cooling of the middle layer. Not sure if that could work though. Those solar cells produce a lot of heat on their own, and they sit right on top of the heat-producing side of the thermoelectric cooling. And the sun burning down on it as well - that's a lot of heat right there at the outer layer. I don't think thermoelectric cooling can overcome that high a temperature difference. It works best when the temperature difference between the cool side and warm side is pretty small.
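For a sense of the magnitudes in the condensation argument above (rough numbers assumed: latent heat of vaporization of water ~2260 kJ/kg, one litre of sweat per hour):

    L = 2260e3                 # J/kg, latent heat of vaporization of water
    sweat_rate = 1.0 / 3600    # kg/s (1 litre per hour)
    print(f"{L * sweat_rate:.0f} W")  # ~628 W absorbed at the skin

That same ~628 W is released wherever the water condenses, which is exactly why the suit needs a heat pump to move it outward against the desert heat.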
And I created a reverse image search to look up pill data: https://play.google.com/store/apps/details?id=be.harmreducti...
2: developing easy-to-use tools for parametric generative design to enable hyper-personalisation: https://hyperobjects.design
We've implemented recycling maximally and created programs for reuse, repair, toxic waste, general waste reduction, battery disposal, returnables, etc. But it isn't really making a significant impact.
People still spend most of their disposable income buying things they don't need that can't be easily repaired, or that they can't be bothered to repair, or that they discard due to fashion, or because they are mostly packaging, etc.
We can't get them to stop buying useless stuff that they don't need. We can't stop them from buying new cars every few years.
We can't get them to spend money on quality infrastructure, like insulation, that would reduce their energy needs by about 80%.
We can't get them to stop going to restaurants or getting food delivery or take-out, which expends a multiple of the energy and greenhouse gases of cooking your own locally-sourced food.
We can't get them to understand that we're all going to die in fairly short order unless we bring the environmental disaster under control.
Existing attempts to solve this problem are hackish and difficult to customize: they typically treat each glyph as a set of features and handle diacritics and digraphs by naive composition and awkward special-casing. They also aren't written with an eye to customization in either alphabet or featural model: they typically map an ad-hoc extension of IPA to an ad-hoc featural model.
I think a natural improvement would be to develop a specification language in which each individual glyph (base character or diacritic) is a function from a feature set to a feature set, with Haskell-style pattern-matching to allow graceful handling of digraphs and context-sensitive diacritics - although syntactic sugar for digraphs would in practice be required for usability. Ideally it would also be possible to map feature sets to feature sets, in order to preserve a human-readable intermediate form (e.g. "unvoiced dental plosive") which is later mapped to the more customary binary features.
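A toy sketch of that idea (all names and features hypothetical), with glyphs as functions over feature sets and a diacritic that pattern-matches before applying:

    def base(features):
        # a base character introduces its features from scratch
        return lambda _: set(features)

    def diacritic(match, add, remove=frozenset()):
        def apply(fs):
            if not match <= fs:
                raise ValueError(f"diacritic needs {match}, got {fs}")
            return (fs - set(remove)) | set(add)
        return apply

    t = base({"consonant", "alveolar", "plosive", "unvoiced"})
    bridge_below = diacritic(match={"consonant"}, add={"dental"}, remove={"alveolar"})

    print(sorted(bridge_below(t(set()))))
    # ['consonant', 'dental', 'plosive', 'unvoiced'] -- an unvoiced dental plosive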
In addition to the utility for phonological databases and the like, this would also enable more rigorous testing of crosslinguistic feature sets: every feature set is implicitly a set of proposed linguistic universals and existence claims. If two segments have the same featuralization, they should never contrast in a given language; if they do, the featuralization is unsound. And if a featuralization proposes the existence of many contrasts that aren't attested anywhere, it could probably stand to be optimized.
But most of my interest in this comes from my work on a phonological database. The database needs some method of handling featuralization to facilitate feature-based search, and I just haven't seen a good way to do that yet.
Politicians/companies spew words and we tend to accept them because they're "authorities." I believe the best way to increase "skin in the game", accountability, and humble expertise is to predict and have your predictive performance be visible.
There are prediction markets to trade real money with, but none that I've seen that rank your performance against others.
I'm building a platform where it's free to enter/submit predictions in categories of interest. Top ranked players will receive prizes.
I think once players can measure themselves against "authorities" (whose public predictions will be scraped), both will become more accountable.
After predictions, I aim to work on "promises" since they both increase future skin in the game.
If you've read this far, you should consider joining :) - https://oraclerank.com
It's essentially a problem of distributed timers and distributed transactions.
(If anyone has any resources on how similar problems have been solved in the past, I'd appreciate it.)
My life goal is to bring back more of the programming and design techniques that we as a society used to make the best PS1 and Game Boy games and old-school animations to the way we are currently developing games. (Any tips and advice on how I could help improve the game development industry are welcome. Next year I will be doing my master's, so I'm also still looking for a subject for that. The last couple of months I have been experimenting with watercolor effects in OpenGL, so I'm looking for something like that.)
I’m about 12 years into it and expect I have a couple/few more to go. Hopefully people still use HTML by then.
Just about every company these days has its data spread out all over the cloud: marketing data on Facebook and Google, social media data on Twitter and Snapchat, customer data on Salesforce, sales data on Shopify and Amazon, and so on. Most companies will either (a) hire a team of data engineers to collect and exploit this data; (b) hire an expensive consulting firm to build an ETL pipeline; or (c) let this data rot in the cloud. For the past 6 years, I've worked as a data engineer (where I became intimately familiar with Facebook and Salesforce APIs), and I'm confident that I can automate around 80% of my job.
It's clear that the value prop is astronomical: just one data engineer will run you at least 150k/yr and most of the work will involve maintaining API data pipelines. Having a "one-click" solution where one simply provides an API key and what data they'd like to warehouse (e.g. marketing data, social media data, customer data) and where (FTP, S3, Redshift, DynamoDB) would be invaluable to companies that want to make sure they exploit this treasure trove.
Some hard/interesting problems:
- API specs constantly change (Facebook, for example, has a quarterly update schedule)
- Inferring JSON schemas is hard (see the sketch after this list)
- Data integrity is hard (data types sometimes change willy-nilly)
- API rate limiting is tricky
- Resilience is hard
- Recovering old data (especially for certain services) might be impossible
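On the schema-inference point, a tiny sketch of why it bites: the same logical field can arrive as different types between records, and something downstream has to reconcile that:

    def infer_type(value):
        if isinstance(value, bool): return "boolean"  # before int: bool is an int in Python
        if isinstance(value, (int, float)): return "number"
        if isinstance(value, str): return "string"
        if value is None: return "null"
        if isinstance(value, list): return "array"
        if isinstance(value, dict):
            return {k: infer_type(v) for k, v in value.items()}

    records = [{"id": 1, "cost": 9.99}, {"id": "a1", "cost": "9.99"}]
    print([infer_type(r) for r in records])
    # [{'id': 'number', 'cost': 'number'}, {'id': 'string', 'cost': 'string'}]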
Everyone is starting to become keenly aware that letting the data rot is starting to have a higher and higher opportunity cost. Not warehousing your own data is simply not a tenable option any more: the world’s most valuable resource is no longer oil, but data[1].
[1] https://www.economist.com/leaders/2017/05/06/the-worlds-most...
We are trying to develop a unified service-building experience wherein the user will be able to punch in their requirements and get a product tailor-made for them, along with the estimated price (which comes with ~10% tolerance). It's tough, but we're getting there.
Problem: Currently, users (or attackers) can easily manipulate the location provided to an app on a phone.
Solution: Use raw measurements from positioning satellites to check if the location reported by a user actually lines up with the measurements of their phone.
Why is it hard?
- Lack of documentation, standardization and support for collecting raw measurements on phones.
- Processing raw measurements is tricky.
- Finding anomalies in this raw data is even harder.
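One plausible form of anomaly check (a sketch, not our production method; positions in ECEF metres, the shared receiver clock offset estimated as the mean residual):

    import numpy as np

    def clock_free_residuals(reported_pos, sat_positions, pseudoranges):
        geometric = np.linalg.norm(sat_positions - reported_pos, axis=1)
        residuals = pseudoranges - geometric
        return residuals - residuals.mean()   # remove shared clock bias

    def looks_inconsistent(reported_pos, sat_positions, pseudoranges, tol_m=150.0):
        r = clock_free_residuals(reported_pos, sat_positions, pseudoranges)
        return np.abs(r).max() > tol_m

If the reported location is fabricated, the pseudoranges won't fit any single point plus a clock offset, and the residuals blow up.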
Some of it is working - yay! - and there's also a public API, such that others can use it too: https://claimr.tools
Right now the tools we have for programmatically reading through code are: 1. Code search, which is fast, but inaccurate/heuristic. 2. Static analysis, which is slow to run and difficult to write, but very accurate.
I'm building a tool that is as fast and easy to use as code search, and is as accurate and expressive as static analysis.
Still just a landing page [0]. Looking to get a public playground people can mess around with this week.
The same principle can also be used to create a real-time software harmonizer [1] for live performances, but this problem already has a reliable solution through hardware.
Currently, I am trying to hook them together and come up with basic APIs to control them. Next is printing a 3D model.
I have a bunch of tasks I have to complete by a certain deadline: things like engineering sprint tasks, drafting a design document, completing an assignment, etc. I have to get these tasks done in between my regular schedule of meetings, lunch breaks, and rests. I want a program to tell me when I should work on what, depending on a task's due date and priority. If something comes up, I want my schedule to readjust to accommodate the interruption.
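A greedy sketch of the core step (everything simplified: tasks sorted by due date then priority, poured into free slots; re-run it whenever something comes up):

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        hours: float
        due: float      # hours from now
        priority: int

    def plan(tasks, free_slots):  # free_slots: list of (start, end) in hours
        schedule = []
        tasks = sorted(tasks, key=lambda t: (t.due, -t.priority))
        for start, end in free_slots:
            cursor = start
            while tasks and cursor < end:
                task = tasks[0]
                chunk = min(task.hours, end - cursor)
                schedule.append((task.name, cursor, cursor + chunk))
                cursor += chunk
                task.hours -= chunk
                if task.hours <= 0:
                    tasks.pop(0)
        return schedule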
A positive streak can easily turn into a negative streak.
At the moment I’m brainstorming with a google sheet and a few manual tweaks. I believe there’s a way to help people/myself build a better life by removing the nasty things (like smoking) and grow the nicer things like healthy eating or exercise. But our desire for instant gratification and our lofty goals get in the way.
My proposal is to take out classified ads in newspapers, couching it as a code or puzzle to be solved, and put in dates of major future events you know about. D-day, Kennedy assassination, Challenger explosion, 9/11. Top it off with Murder Hornets and they'll know about when you came from.
I am working on this at the moment, and no, I don't expect that what I write here will convince anyone. And I don't have any summaries of the work at the moment.
So far, I have full baseline feature support for everything back to Mosaic, including Lynx, IE3, Netscape3, Opera3, and many others.
At the same time, still including advanced features like client-side PGP for browsers which will support it.
Every browser presents its own challenges, and it is not always the oldest ones which have the dumbest behaviors.
My intent is to promote interoperability and offer something as an alternative to today's near-monoculture.
It's inspired by the CellSol network in the "Left Behind" novels.
Schematics and code are at https://www.aaronswartzday.org/lora/ and we could use help.
Getting discussions about these issues to include financial incentives is a weird and hard problem. People are too sure the players (on their team) are altruistic.
It will have all the features that WooCommerce has, but much more stable and easily customisable.
Harder: People
Impossibly hard: Help people get faster at building tech
And yet it's all I want to do.
https://medium.com/@patriarch_39868/donald-trump-detector-ec...
Currently researching if it's needed and if people would pay for this. Note it's only useful for gaming/cloud gaming, not for other applications.
For now, my team and I are focusing on video-conferences, but the end goal is much larger :)
Now that I think of it, this problem is a lot easier than what others post here.
It turns out semi-supervised document recommender systems aren't easy to bootstrap with zero user data.
Hard problem:
How do we evolve design tools? Can Sketch/Figma be evolved to create full featured software? [1]
Something with no limits, and the freedom to create any feature developers create today with React/Angular/Vue.
Is it possible or a pipe dream?
The hard problem is multifaceted:
I would argue that open source as we know it fails to balance the market. We now have monopolistic tech incumbents in the “GAFAM“ companies, which thrive on open source while paying little tax and outcompeting actual tax-paying businesses. I see maintainers either burning out or selling out to venture capitalists.
I want to believe in free and open source, but I also see that it fully enables surveillance capitalism, casino capitalism and tax avoiding monoliths.
So, I realize that I need to move past classic licensing and consider ethical licensing that try to remedy society’s inequalities and injustices.
Call me a cyber hippie, but if I want to build cool stuff in my spare time to share I want to maximize its chances of doing something good in the world. To that end I’m evaluating some ethical licenses.
There are many ethical licenses out there which are evolving. Presently, I’m evaluating this one: The (Cooperative) Non-Violent Public License: https://thufie.lain.haus/NPL.html
After the weekend I’ll try to get in touch with a lawyer to review the license implications. It’s arguably not open source by definition, but maybe more so in spirit.
Solution : Abstract the problems and automatically generate framework code.
Demo project : https://github.com/imvetri/ui-editor
I'm working on setting up a webserver such that it will:
A) Stay up
B) Stay up to date
C) Stay uncompromised
For the entire time I am gone with no interaction.
Or rather, that's what I want to be working on. Instead, I'm working on making myself not be a lazy [blank].
This is just one part of one of the sillier things I'm working on/thinking about. How can I make a real-time interactive soil simulation work, essentially a big realistic virtual sandbox.
Since this is a side project, it'll go as far as all side projects go. =)
I don’t know about other parents here, but cracking the code to my 3.5 year old is a really hard problem for me.
Basically google spreadsheets but not google and not spreadsheets in a webapp.
Haven't done any coding on it, but have been mulling the design for several months now.
And failing to do so.
THE PROBLEM
How do you automate and aggregate context across business departments for various forms of activity, and then map that to marketing analytics in a way that gives relevant and sufficient insights beyond just channel or user data? How do you more fully answer the question of "what happened when [$thing happened]?"
THE VALUE OPPORTUNITY
Countless people-hours and marketing dollars are wasted going down fruitless rabbit holes looking for what caused some change, or thinking they found the cause in a change in performance and pursuing that when in reality it was something else. In many of these cases, this could have been easily avoided if only there were sufficient data on the business activities (internal and external) logged and aggregated with marketing data in a way that was then automatically surfaced in an appropriate manner. As the scale of the company increases, so does the impact of this.
WHY IT IS WEIRD/HARD
It's weird in the sense that only a small subset of people are immersed in analytics enough to be aware they should care about it, and probably fewer geek out enough about marketing analytics and process to care about trying to solve it. It is hard because it is just as much a people challenge as a technical one. The technical side is somewhat straightforward in terms of aggregating as many data inputs as you can--it's basically a ton of data plumbing and monitoring for changes with that. Whether that's bid management platforms and DSPs or SSPs, email platforms, site analytics, etc. But then also project management tools and properly categorizing the metadata for relevant updates to be surfaced. You have challenges around walled data gardens and comparing apples to oranges around things like attribution measurement, but that is something that can be handled. Surfacing it in timely and sufficiently useful ways is an interesting design and UX challenge though, from annotations and "pull" data, to modals and callouts that are more "push" in how they inform people of context before it bites them.
The people side, however, is constantly in flux in a way that the data side is not. Some aspects of this absolutely rely on consistent adherence to process to capture key data that is hard to slurp up through an API. Some of it is quite ephemeral. I've encountered team situations where people object to (or struggle with, due to limited training) filling out a couple of fields in a Google Sheet, or need to be hounded to fill out a given form, etc. Some companies can enforce this to levels others cannot. Things also get really interesting at large companies (think FAANG). You're dealing with many teams, many overlapping or conflicting processes such a solution would need to be embedded into, localization, internal/external vendors of varying levels of visibility needs, and also personalities who may want more control over their orgs' processes and need persuading.
At the end, this all needs to be balanced against how much utility you get out of the insights because it is easy to over-index on investing in building this tech and process out only to not get insights out of it. Unfortunately you often only learn that after the fact when you've been bitten by it.
If there are any companies trying to solve for this, please do reach out (see profile). I love chatting about it and want to help build the tools and processes that solve for this at scale, and I have ~15 years of experience in the space, a good chunk of which has been spent trying to solve for variations of this.
- Of the countries in the world, none are really free, and generally have quite burdensome taxes relative to their benefits.
- Representative democracy has not improved a whole lot from the v1.0 created a few hundred years ago. For example, voting is largely between "person I don't really like" and "person I hate," and you're just 1 vote in millions, meaning voting is irrational (https://en.wikipedia.org/wiki/Paradox_of_voting)
- In referendums, people vote without really knowing much about what they're voting on.
- There's a lot of people who want to move to a better country, but immigration policies are very restrictive.
Solution:
- To create a new country, Sordelia, based on liberty, sortition, and deliberative democracy.
- Laws are voted on by a small, randomly selected citizens' assembly. The random selection keeps the assembly small, so every vote counts, yet is statistically significant, so that if a law passes, we're highly confident it would've passed had it been put before all the voters. This assembly will learn and debate the pros and cons of the proposal before voting.
- Laws that would violate fundamental principles (like freedom of speech) are disallowed.
- We aim to purchase land from a developing country. There are plenty of good reasons they'd sell to us, beyond the immediate payment. The developing population and infrastructure will bring an increase in trade and jobs for their country. There have been many historical examples of special economic zones (SEZs) having positive influence on their neighbors, and Sordelia's effect on nearby countries will be similar.
- We'll have pro-growth and pro-immigrant policies to attract people to our country.
- The rise of remote work makes this even more attractive for moving to Sordelia.
If this interests you, join us at:
How can we streamline the transfer of knowledge from the oldest, wisest individuals on the cutting edge of their field to the youngest, most ambitious, sponge-like individuals just starting out their careers?
I built an MVP to solve this at a hackathon: https://devpost.com/software/oravise
anyone interested in collaborating?
Avesnetsec.com
I work for a research hospital.
Why? Because it is cool and I want to learn more about compilers.
I've got a problem that's driving me crazy: find a way to present a huge amount of written (and some visual) content, through a website[1], in an interesting and accessible way.
One of my hobbies is constructing worlds. One world. I've been working on it for over 40 years now. 20 years ago I decided to share my work with the world, and built a website for it. The approach I took then was to divide the site into a homepage, with links to more-or-less self-contained subsections. The solution works, but I want something better. I just don't know how to achieve it.
A wiki-based approach[2] would seem to be the obvious choice. But my experience of wikis is that they feel too, well, fragmented. There's ways of overcoming this (portals, wikibooks, etc) but I tried building a wiki ... it didn't work for what I wanted to achieve.
Another approach could be to give up on building my own site and instead rely on a cloud provider to do all the hard work for me[3]. But I dislike this idea on every level I can think of. For a start, what happens to all my work when the host company collapses, or pivots to a more profitable idea? What happens when I want to introduce a feature - "teach yourself language X" lessons that the site architecture doesn't support?
Of course, if I had all the money in the world then I could employ many very clever people to design, build and develop content for a truly wonderful user experience[4][5][6] ... yeah. That's not gonna happen. So the solution needs to be "doable".
Any ideas on how to solve my problem will be very gratefully received!
[1] - http://www.rikweb.co.uk/kalieda/index.php - My constructed world's current website. I used to be very proud of the site's structure and design. But I know in my heart it can be much better!
[2] - Encyclopaedia Ardenica - https://www.otherworldproject.com/wiki/index.php/Main_Page - is a really nice example of what can be achieved with a well-thought-through wiki.
[3] - WorldAnvil - https://www.worldanvil.com - seem to be the leading example of this approach.
[4] - The Potterverse - now at https://www.wizardingworld.com/ - seemed for a while to be a Gold Standard for wonderful fantasy world websites. The Pandoran Research Foundation [5] (https://www.avatar.com/) is another excellent example. But even here, their Pandorapedia [6] (https://www.pandorapedia.com/pandora_url/dictionary.html) ... it feels like it could be so much better - but how?
I'm learning a lot about audio engineering and wave mechanics.
( I also co-wrote a story about what happens if I fail, which you can read at https://emlia.org/pmwiki/pub/web/LeftBeyond.TalesFromTheBeyo... )