Jordan's blog
- jordansparks
- Site Admin
- Posts: 5751
- Joined: Sun Jun 17, 2007 3:59 pm
- Location: Salem, Oregon
- Contact:
Jordan's blog
I think maybe I'll try blogging here for a while just to give people an idea of what I'm personally doing. Maybe about once a week.
This week, I finished the conversion tool for switching from WinForms to WPF. We have about 1000 windows to convert, so we're starting with the simple small windows. The tool converts a form to a window, and then an engineer has to fix a few little things and test it. Finally, I review it. But only the simplest forms can currently be converted, so I'm continuing to work furiously on building all the new controls, each with their various properties and methods. The grid is a really complex one. Every day, I make a little more progress on it. Last night, I got blazing fast row selection working. It looks and feels absolutely identical to the current WinForms grid, but the drawing technology is completely different. Why are we going to all this trouble? Well, WinForms doesn't handle high-DPI screens very well at all. We already worked for two years on that issue, which was why we added the Zoom feature. We pushed WinForms right to the limit of what it can handle. But there are still so many issues on high-DPI screens that we have to switch to a whole new underlying technology. The hard part is doing it one window at a time. Once I figured out a way to do that, we were finally able to start the switch. It will take months or years, but we're now moving.
Yesterday, the server room started getting hot. Looks like we are at the limit of what our 4T A/C can cool. Time to add another A/C and start planning a new much larger server room. I'm trying to think really big because I need it to last longer than 5 years. Our current server room has space for 9 racks and we are filling it up far faster than I imagined. So I'm thinking more like 100 racks for the next one. We'll see. I'll spend a few hours laying it out and then add some calculations for A/C requirements, backup generator requirements, etc. It might be time to switch to a raised floor so that the A/C can come in low in front of each rack.
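The cooling math behind that planning is simple enough to sketch. A back-of-the-envelope calculation in Python; the per-rack power draw and safety factor below are illustrative assumptions, not actual measurements:

```python
# Rough server-room cooling load: nearly all electrical power drawn by
# the racks becomes heat that the A/C must remove.
BTU_PER_TON = 12_000   # 1 ton of cooling = 12,000 BTU/hr
BTU_PER_KW = 3_412     # 1 kW of heat is about 3,412 BTU/hr

def tons_needed(rack_count, kw_per_rack, safety_factor=1.2):
    """Tons of A/C required for rack_count racks, each dissipating
    kw_per_rack kW, with some headroom for hot days and growth."""
    btu_per_hr = rack_count * kw_per_rack * BTU_PER_KW
    return btu_per_hr * safety_factor / BTU_PER_TON

# A 4-ton unit removes 48,000 BTU/hr -- only about 14 kW of IT load
# before any safety margin, which 9 loaded racks can easily exceed.
```

By the same math, a 100-rack room at even a modest 5 kW per rack would need on the order of 170 tons of cooling, which is why the layout, raised floor, and generator planning all have to scale together.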
I'm using ChatGPT daily to help with programming. It can come up with algorithms a little bit faster than I can alone. It might have to try three times, and it might completely lie every now and then, since it always tries to tell me what it thinks I want to hear. I'm thinking of AI now as a universal translator. Input text, get an image. Input video, get code. Input voice commands, get robotic movements. Literally anything with a pattern can be the input or the output. It's really quite amazing. Over the next 10 years, I expect it to morph into a personal assistant that I've trained on my data and which can help me rearrange, sort, and summarize my data on the fly. I want to ask it where my slippers are, I want it to automatically adjust water temp for me based on previous commands, and I want it to be my super smart assistant that cranks out C# code and edits just the way I like it with my very specific style. I'm waiting for the day when I can have it look at the entire code set and make strategic suggestions and then implement those suggestions. And yes, I expect this to finally make humanoid robots feasible. It will still take 10-20 years, and we will need some better computer chips, but I now think I will see robots in my lifetime. I was skeptical until I started using AI. And that will change everything, of course.
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
I'm on vacation now. That's very exciting because I can get lots of work done on the plane, in the evenings, etc. I'll build and enhance a number of WPF controls so that we can convert some more complex windows in a few weeks. A really complicated one will be the perio chart. It's very old. There are actually about 4 layered overhauls that we will do in rapid succession: 1. Fix all the badly named variables so that we can see what's what. 2. Add some extra navigation with up/down arrows or single-letter jumping so that external voice-to-text software can navigate to the different kinds of rows for a tooth. We already have an external company ready to go with this. They said it's AI-based. We'll see. 3. Fix all the old patterns, like converting arrays into lists and things like that. Probably enhance the auto advance a little bit at this point. 4. Convert it to WPF so we get flawless drawing at any DPI.
We finally got the permit for Bldg H. It took over six months just waiting for the permit. Bldg H is 40,500 sq ft and 3 stories tall. The entire first floor is a daycare for 100 kids: infants, toddlers, and preschool. We won't include any school-age children. This should give us roughly 100 good quality employees, many of whom can't work right now because of the 3-year waiting list for childcare at every facility in town. The second and third floors will be a call center and should hold a total of about 250 additional employees. We don't quite need that space yet, but it's better than the stress of trying to build at the last minute.
I'm going to purchase an Apple Vision Pro the moment they come out. I have never purchased an Apple product in my life except an early iPhone. The main reason is that C# has historically only worked on MS computers, so I've had to stay in the MS ecosystem. That's gradually changing, although Open Dental uses a lot of Win32 API calls for complex things, so it won't be terribly soon. But I really need a VR headset. The only thing I've been waiting for is more pixels. The Vision Pro will have 4K per eye, and by just turning my head, that gives me maybe about nine 4K screens all spread out in front of me. That's going to be a productivity booster, especially for situations like right now when I'm typing on a tiny 15" laptop. It's unbearably inefficient. I frequently need to see diffs (two versions of code side by side). There are just barely enough pixels on a 4K screen for that task. My hope is that the Vision Pro will let me somehow remote connect to my MS PC. It's just a video feed, so it should be possible. Then, I can use other apps built into the Vision Pro for additional screens, like browsers, where it doesn't matter if it's Apple. Hopefully my neck won't give out under the weight of that thing until they have time to build lighter versions, which they absolutely must do. If I were designing it, I would try to move some or most of the computing power off to the external unit connected by the cord. And I would certainly get rid of the heavy glass. People don't need to see my eyes when I'm by myself. Finally, I would ditch some of the cameras to keep it lighter. The only camera I need is one down low to see my real keyboard.
Re: Jordan's blog
I switched to a standing desk. Sitting is an independent risk factor for diabetes, probably because constant muscle work is needed to process sugar. The brand we use is from Autonomous.ai. I'm not the first at the company; we already have hundreds of them. I thought I would get tired standing, but not as much as I expected. One button goes to sitting height and another to standing height. It works really well. I went with an L-shaped desk with three legs. Now I have to switch out the one at home.
I haven't had very much time to program over the last week. Just catching up on e-mails, working on the new building, reviewing code, taxes, etc. It's all the stuff that I don't enjoy. But I feel like I'm getting closer to being able to have some fun.
It feels like we're getting ready for another release. I really don't pay too much attention to the release schedule, so I'm not sure. But I've been involved with some work on Sheets and some other areas that need to be done quickly, so it's probably coming soon.
A few months ago, scientists reversed the age of mice:
https://www.cell.com/cell/fulltext/S009 ... ctitle0010
They did it by turning on 3 of the Yamanaka factors. If you turn on all 4 Yamanaka factors, then the cells become pluripotent stem cells. They've been doing that for over 10 years, and Yamanaka won the Nobel prize for figuring it out. Well, with 3 of the factors, the cells literally become young again, but they stay their same type instead of becoming iPS cells. This is stunning. I never thought this would happen in my lifetime. It means that I might just achieve aging escape velocity in about 40-50 years -- if I can live that long. So I've been putting a lot more work into my anti-aging routine. This includes rigorous diet, exercise, sleep, some supplements, and intermittent fasting on a daily 18/6 cycle. Oh, and lots of flossing, of course. I thought exercise would steal most of my time, but I think food prep actually takes more time than exercise. Lots of broccoli carrot slaw, black bean and corn salad, salmon, oatmeal with milled flax, etc. So the real trick is to maximize all the anti-aging routines while also minimizing how much time they steal. I'm getting better and better at this. It's a skill. I was also surprised at how long it took to go from being completely sedentary for 30 years to reaching a certain level of fitness. What I assumed would be a 1 to 2 year journey is looking more like a 10 year journey. The body has a strong physiological memory, and changing that set point is not trivial at all. Losing fat goes pretty quickly, but tendons and bone just take forever to adapt in an old guy.
Re: Jordan's blog
I think my posts will tend to get a bit more technical from here on out, since that's what consumes most of my time and attention.
I've been working on a variety of properties for WPF, like IsEnabled and ReadOnly. I also added a PictureBox control. There are an awful lot of little details that need to be perfect for the WPF transition. Yes, it seems a bit quixotic sometimes, and it could take many years. If we do it well, nobody will even notice, but it will just gradually work better and better on higher-DPI monitors and it will gradually become more and more stable. Toward the end, as we move the main modules over to WPF, I'm hopeful that it will also allow some new features with overall window layout. Is there another way that might be simpler and give some sort of more obvious result? I don't think so, not without creating spaghetti code. This is a huge investment of time and money that will reap rewards years from now. I know it needs to be done, so we're just doing it. I expect a similar transition will be needed every 20 years or so. 51 windows done; 980 to go. Progress will be kind of like a bell curve: slow at first, fast in the middle, and then slow as we get to the end.
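That bell-curve intuition describes the conversion *rate*; the cumulative window count then traces an S-curve. A toy logistic model in Python, just to illustrate the shape; the steepness and midpoint parameters are made up, not a real schedule:

```python
import math

def windows_converted(t, total=1031, midpoint=0.5, steepness=8.0):
    """Logistic S-curve for cumulative conversions out of 'total'
    windows (51 done + 980 to go). t is the fraction of the projected
    schedule elapsed, 0.0 to 1.0. Parameters are illustrative."""
    return total / (1.0 + math.exp(-steepness * (t - midpoint)))

# The derivative of this curve is the bell: slow at first, fastest
# near the midpoint, slow again as the last stragglers get done.
```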
Now it's time to implement TabIndex so that you can tab from textbox to textbox in the right order, typically downward. Unfortunately, WPF implements TabIndex differently than WinForms, so we have a little bit of work to do. Here's my plan:
1. Sort out the way keyboard focus and logical focus work in WPF and build patterns for use in the new framework. It seems different from WinForms, so we have to get very deep into how it works to make sure we get it right. This will take about a week.
2. Set TabNavigation mode to Local for all of our container base controls. This should make it behave the same as WinForms. The other modes don't seem useful in our situation.
3. Add a TabIndex property to most of our custom controls. This would hide the existing property and allow us to put it in the OD category which is more convenient.
4. The WPF default for TabIndex is int.MaxValue, which is 2,147,483,647, effectively making it always last. This was not the case in WinForms, where TabIndexes usually had somewhat random numbers and duplication, with typical values in the single or double digits. I don't think we really want to convert all of that clutter. We might only convert a range of TabIndexes, under control of the engineer during a conversion.
5. Implement the conversion script.
6. WinForms had a GUI tool that allowed clicking on controls in succession to set TabIndex. WPF lacks that feature because they put more focus on flow-style windows. We will probably partially duplicate this tool somehow. One way would be to flip a boolean property of the Window in design mode. This would cause all the controls to draw differently, with a blue square and a white number at the upper left. This extra graphic would be implemented inside each of our controls. Build it once, copy it a dozen times. Another trick that would speed up development is that if we change the value of one TabIndex, it should fix the others, like removing duplicates and gaps. This automation would probably only affect TabIndexes greater than the one being set, but we'll see.
7. Look at how IsTabStop interacts with TabIndex in both frameworks. Maybe IsTabStop doesn't matter at all. If it does, make sure it's getting converted.
8. Test various nesting scenarios and containers.
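The step-6 automation (fixing up the other TabIndexes when one is changed) amounts to an ordinary renumbering pass. A minimal sketch in Python of one way it might work; the function name and the exact policy are hypothetical, not the final design:

```python
def normalize_tab_indexes(tab_indexes, pinned):
    """Renumber TabIndex values after an engineer edits one control.

    tab_indexes: current TabIndex of each control, in layout order.
    pinned: list position of the control that was just edited; its
    value is kept exactly as the engineer set it.

    Controls whose TabIndex is greater than the pinned value get
    consecutive values starting just above it, removing duplicates
    and gaps while preserving their relative tab order.
    """
    pinned_value = tab_indexes[pinned]
    result = list(tab_indexes)
    later = sorted(
        (i for i in range(len(tab_indexes))
         if i != pinned and tab_indexes[i] > pinned_value),
        key=lambda i: (tab_indexes[i], i),  # stable: value, then position
    )
    for offset, i in enumerate(later, start=1):
        result[i] = pinned_value + offset
    return result

# Example: engineer pins control 0 at TabIndex 0; the duplicate 5s and
# the gap before 9 get cleaned up.
# normalize_tab_indexes([0, 5, 5, 9], pinned=0) -> [0, 1, 2, 3]
```

Indexes at or below the pinned value are left alone, matching the idea that the automation should only affect TabIndexes greater than the one being set.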
Re: Jordan's blog
I think a lot about what Open Dental might look like in 20 years. I'm pretty certain that users will have the ability to interact with it in ordinary spoken language. This would mostly be useful for new users who don't know where things are or why things look the way they do. Once you understand everything, it's probably faster to type and click. But even a power user sometimes gets a little stuck and needs fast help instead of having to call us. So I like to think about where AI might get its answers from. I see a few sources of info:
1. It will read our manual. So the more complete our manual, the better it will be able to answer.
2. It will read our source code. I find myself looking at the actual code sometimes when the desired behavior is not documented. It could do the same.
3. It will read your database, either directly or through the API.
4. It will click around inside of Open Dental and then read what's on the screen. This is known as Robotic Process Automation (RPA), and it's currently a common way to run macros, but it could get a LOT better once the AI understands what it's doing.
And then, once it has an answer, it might use RPA to show you something or take you where you need to be. I can't really think of anything we can do to prepare for intelligent RPA other than just continuing to work hard to make the UI clear and functional. If it works for a human, it will work for RPA. It might also mingle RPA with calls to our API to perform actions. Using the API can be a little more robust and a little faster. It would also allow actions to be invisible to the user and not pull them away from their current screen. It's going to be exciting for sure.
Of course, everyone's been trying to do all of this since the dawn of computers. Clippy, Alexa, and Cortana are just a few of many historical examples. So it's obviously very hard to predict when it will actually happen. But when it does happen, it will make things so much more efficient.
Re: Jordan's blog
I've been working on lots of little things:
-Rich text box. Ours is very customized, with a number of dependent popup windows for things like autoNotes and quickPasteNotes. So there are lots of moving parts. It's maybe 50% done.
-Filter controls. This is a chunk of code that someone wrote about 10 years ago, and I'm modernizing it. In certain windows, there are filters at the top for things like name and date. As you type, you want the grid below to update. But you also don't want it to lock up with each keystroke. So the db query goes on a background thread. It gives the window a nice clean feel. That overhaul is now done, so this won't be an obstacle to converting certain windows.
-Progress bar. We have our own progress bar window that comes up for longer operations. I've moved it over to WPF. I should review the threading one more time, but it's essentially done.
-Grid sorting by column.
-Selecting text properly in textboxes when clicking vs tabbing.
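The filter pattern above (restart a short timer on each keystroke; run the query on a background thread only once typing pauses) is a classic debounce. A language-neutral sketch in Python, with `threading.Timer` standing in for the actual background-worker plumbing; the 0.3 s delay is an illustrative guess, not the real setting:

```python
import threading

class FilterDebouncer:
    """Debounce keystrokes so the (slow) database query runs on a
    background thread only after the user pauses typing. The UI
    thread never blocks, and superseded queries never start."""

    def __init__(self, run_query, delay=0.3):
        self.run_query = run_query   # callable taking the filter text
        self.delay = delay           # seconds of quiet before querying
        self._timer = None

    def on_keystroke(self, text):
        if self._timer is not None:
            self._timer.cancel()     # still typing: restart the clock
        self._timer = threading.Timer(self.delay, self.run_query, [text])
        self._timer.daemon = True
        self._timer.start()
```

In the real WPF version, the query results would also have to be marshaled back to the UI thread before the grid refreshes; that dispatch step is omitted here.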
All these little things might seem like a waste of time, but I know they're not. We will now be able to pick up speed on converting our windows. And the benefits will quickly spill over into things that people care about and things that create a bit more of a wow factor. One area that I'm really looking forward to is the Imaging module because we can move some images onto the graphics card to get better performance. Another area that I'm looking forward to is the 3D tooth chart, because it will finally get rid of the rare but still annoying random errors and will finally be rock solid.
Also, they've been pouring foundation footings for the last few weeks. Those are huge suckers. Seems like overkill, but whatever. We also got our street sweeper, so we can now keep all our driveways crisp and clean as the leaves start falling.
Re: Jordan's blog
I've been working on more WPF details, including:
-added TreeView control with images
-added checkboxes to listBox for some situations.
-implemented Autonote composition in WPF, including the automatic yellow text selection.
-getting closer to finishing up our richText box.
This is already starting to pay off: it fixed one previously unsolvable problem from WinForms. The MS checkbox list did not support higher resolutions, so some of those windows were unfixable prior to the switch to WPF. There will be more progress in other previously unfixable areas as we move forward. And we're building momentum so that we can pick up the pace.
I continue to spend a lot of time training all new engineers on proper patterns and variable naming. This is done by having them refactor existing code, which makes it more readable and more resistant to bugs. This training is something I just can't delegate. And, of course, I still personally approve all changes to OD proper and review all changes to the UI or db schema. I'm continuing to eye the perio chart for overhaul, but it will take some time because we must first fix all the badly written spaghetti code underneath. That includes the confusing way that the path advances. As we quietly clean up the code underneath, you'll eventually begin to see the improvements bleed into the actual UI.
We continue to work on various bridges. There are a number of companies offering AI analysis and annotation of x-rays. This would need to be built into Open Dental, unlike most bridges that just launch external software. We're attempting an integration, but no promises that it will happen quickly. I'm cautiously optimistic. It's not cheap though. It seems to run about $300 per month. The wow factor might be worth it for some offices even if it doesn't actually help with diagnosis.
I constantly search for AI to help me be more efficient. I've installed Tabnine, which offers marginally helpful autocomplete and the ability to highlight code and get an AI explanation that is usually a bit wrong. Because we have only 50 engineers instead of the 100 they require, Tabnine won't let us use their enterprise version, which gets trained on the customer's actual codebase and can also be taught to follow various patterns. Supporting smaller teams probably just doesn't make financial sense for them right now. That's the feature I'm really looking forward to, and I suspect that companies like Tabnine will extend it down to smaller teams like ours over the next year or two.
They are building the first humanoid robot factory in the world right here in Salem, Oregon. At least that's how they've worded their press release. I don't think it's humanoid. The knees are backward and it has no fingers. The one thing humans do all day long that makes them so freaking efficient is that they juggle. The entire day is just sort of a juggling exercise. Robots must be able to juggle to be useful. The first truly humanoid robots manufactured on large scale are still a decade away at least. They won't be very good, but they might be able to help the cleaning crew or something.
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
Making some pretty fast progress in lots of little details:
-added WebBrowser control
-added SplitContainer control
-added properties to all windows for ShowMinMax, StartMaximized, HideHelpButton, etc.
-finished up translation framework
-improved the help button
-fixed window icons
-context menu shortcuts
-finished spellcheck
I feel like the pace is picking up. Display issues are gradually getting resolved and it's all working out well. I was really happy about how fast and easy it was to make the SplitContainer. That's the one that has the grab bar to move up and down with a pane on the top and another on the bottom. It was only about 20 lines of code and less than an hour. When I did the same thing last year in WinForms, it was 500 lines of code and took a week. This is so much faster. So when we get to the big complicated windows, the WPF toolset is really going to shine and allow some better splitters and docking layouts. I'm eager to get to those windows.
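To give a sense of why the WPF version was so short: the built-in GridSplitter does almost all the work. Here's a generic sketch of a code-built two-pane splitter, not our actual control; panelTop and panelBottom are placeholders for whatever the panes hold:

```csharp
using System.Windows;
using System.Windows.Controls;

// Generic sketch: two panes with a draggable horizontal grab bar between them.
Grid grid=new Grid();
grid.RowDefinitions.Add(new RowDefinition { Height=new GridLength(2,GridUnitType.Star) });//top pane
grid.RowDefinitions.Add(new RowDefinition { Height=GridLength.Auto });//grab bar
grid.RowDefinitions.Add(new RowDefinition { Height=new GridLength(1,GridUnitType.Star) });//bottom pane
GridSplitter splitter=new GridSplitter {
	Height=5,
	HorizontalAlignment=HorizontalAlignment.Stretch,
	ResizeDirection=GridResizeDirection.Rows,
	ResizeBehavior=GridResizeBehavior.PreviousAndNext,
};
Grid.SetRow(panelTop,0);//panelTop and panelBottom are hypothetical panes
Grid.SetRow(splitter,1);
Grid.SetRow(panelBottom,2);
grid.Children.Add(panelTop);
grid.Children.Add(splitter);
grid.Children.Add(panelBottom);
```

Star-sized rows mean the panes keep their proportions on resize, and the splitter just redistributes the star values as you drag.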
I leave TabNine turned off for the most part. The autocomplete suggestions are generally garbage. The chat window is inferior to ChatGPT. Oh well.
There's a new startup that's trying to build Robotic Process Automation:
https://techcrunch.com/2023/10/04/rabbi ... are-works/
Of course Google and others have also been trying. I feel like the breakout moment is about two years away. Can't happen soon enough.
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
I've been very productive. Spent a lot of time working with other engineers on interesting and complex jobs. A few examples:
-Perio chart refactoring is going nicely. The code is gradually getting more sane and less spaghetti. This is critical before any actual improvements can be made.
-FAQ at the bottom of some manual pages is getting some love. Older FAQs were not being brought forward for the last few versions like they should have been, so we worked on the database to get them all current.
-Continuing to remove all the Cancel buttons throughout the program because they are redundant. Windows can be closed with the X at the upper right.
Personally, I worked on converting the following to WPF:
-ComboClinic. This is a very complex dropdown control that internally enforces clinic security.
-MonthCalendar. This is the one used at the upper right of the Appts module and in DatePicker dropdowns. The WPF version is now better than the original, with very good resizing capabilities.
-DatePicker. The popup MonthCalendar is a smaller size, and the popup is an actual window instead of a fake window. This makes it more powerful. For example, it can spill outside of windows.
Time to go add filled polygons to Image module drawing.
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
The steel framework is going up for Bldg H. Always exciting to watch the welders.
I've asked ChatGPT questions about how to use Open Dental, and the answers are garbage. I'm going to try CustomGPT, but my expectations are low. We all know what we want, but I just think nobody has been able to build it yet. Once we can build a proper chatbot that our techs have access to, they should be able to respond more efficiently. Eventually, we'll feed the AI massive amounts of data in the form of recorded phone calls and screen sharing sessions to make it smarter. The symbiosis of humans and AI should be impressive.
I had a great idea about how to move faster with the WPF conversion. I'm going to start at the other end. I'm going to pick one module, and convert all windows that that module depends on. This also avoids having to convert all the setup windows, which nobody ever really needs to get into very often. I've been noticing some slowness and artifacts in the Imaging module at 4K, so I think that's the place to start. We should see a nice increase in responsiveness.
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
CustomGPT worked. But by "worked," I mean it's about 80% accurate. It can spit out great responses that are relevant to Open Dental. The responses are not word-for-word from our manual, but are reorganized and summarized as only ChatGPT can so elegantly do. The main problem is the 20% that's wrong. You flat out cannot trust the answers it gives. And even the 80% that is accurate isn't very nuanced. Our techs give responses that are far more intelligent and creative. So the frequent news stories about AI taking jobs absolutely will not apply to Open Dental employees for at least 10-20 years. I think one place where this tool might be useful first is to help our newer techs find solutions slightly faster. Or maybe not. I read an interview with the guy who built ChatGPT. He says it's probably not very useful other than for one thing: it lets us dream about the potential. Once you use it, your idea of how far it could go someday changes dramatically. But ChatGPT itself is really only a slight improvement over last year's version. They have not made any sudden breakthroughs, but have instead just made very gradual incremental progress. Each year, they think they might be at the end, but then they come up with another trick to get slightly better. This will continue for decades into the future, and we will gradually have more useful AI. I use ChatGPT heavily to help me with programming, but it's not ready to use within Open Dental -- not yet. So I can just relax and continue to monitor the situation. No FOMO.
Work has begun on overhauling the Imaging module to use WPF, starting with a variety of the child windows. Looks like we have about 15 core issues to work on, about 10 controls to build, and about 23 windows to convert. If we do this right, you will not notice any difference at all... unless you are on a 4k+ monitor, and then you should notice much better speed and no scaling artifacts. It should only take a few months. Subsequent modules should go significantly faster due to all the lessons we learn on the first one.
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
We've been working hard on conversion of the Imaging module. I've personally been working on the ImageSelector at the left. It's just about done. This has required a lot of work on peripheral issues like the ToolTip, vector drawing with SVG-style commands, and some threading to load thumbnails quickly. At the same time, I've been managing about 4 other engineers who are migrating the various windows, one by one. I estimate we are about 1/3 done, but as always, the pace tends to pick up after everyone gets proficient at the process.
It turns out that MS lets us nest WinForms and WPF controls inside each other with no limitation on the depth of nesting. This could open up some interesting strategies, but we still have the problem of WinForms controls not scaling correctly at high dpi, so we really don't want any of them sticking around. There are also a lot of windows that we just don't care about very much, like old EHR windows. I'm trying to figure out a way to dump windows like that somewhere where they will still function and not need a conversion.
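For reference, the nesting works through two interop host controls that MS ships in WindowsFormsIntegration.dll. A minimal sketch, with placeholder child controls and container names:

```csharp
using System.Windows.Forms;
using System.Windows.Forms.Integration;

// WPF inside WinForms: ElementHost wraps a WPF element so a Form can contain it.
ElementHost elementHost=new ElementHost { Dock=DockStyle.Fill };
elementHost.Child=new System.Windows.Controls.TextBox();//any WPF UIElement
formLegacy.Controls.Add(elementHost);//formLegacy is a hypothetical WinForms Form

// WinForms inside WPF: WindowsFormsHost goes the other direction.
WindowsFormsHost winFormsHost=new WindowsFormsHost();
winFormsHost.Child=new DataGridView();//any WinForms Control
panelWpf.Children.Add(winFormsHost);//panelWpf is a hypothetical WPF Panel
```

Each host bridges the two rendering systems, which is exactly why dpi scaling of the WinForms side remains the weak point.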
Today, it's more vector graphics. I built a tool about 5 years ago to let us import SVG icons by generating C# code. The code generation is ready to be enhanced to support WPF instead of WinForms. This is fun. This lets us get away from the old Direct2D and do all our drawing without any outside dependency to C++ libraries.
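To give an idea of the approach (this is an invented example, not the tool's actual output), the generator could turn each SVG path into a method that parses the path data into a WPF Geometry and draws it with a DrawingContext -- no Direct2D or C++ involved:

```csharp
using System.Windows;
using System.Windows.Media;

// Hypothetical example of generated icon-drawing code for WPF.
public static class IconExample {
	public static void DrawSquare(DrawingContext dc) {
		//The path data string comes straight from the SVG file's "d" attribute.
		Geometry geometry=Geometry.Parse("M 2,2 L 14,2 14,14 2,14 Z");
		dc.DrawGeometry(Brushes.SteelBlue,new Pen(Brushes.Black,1),geometry);
	}
}
```

Geometry.Parse understands the same path markup syntax as SVG's path commands, which is what makes this kind of translation straightforward.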
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
Finished up a lot of details on the ImageSelector. The new one completely replaces the old one already, and I can't tell any difference at all. It admittedly took quite a bit longer than I planned. In the process, I perfected a technique for setting the scale and zoom of these nested WPF controls. The vector drawing was a huge success, so now that framework is all in place and can be easily expanded.
I also figured out how to reduce the number of windows that we need to convert for the Imaging Module. A big initial problem was that each window had other windows that it called, and sometimes you would hit a window that would just explode the number of descendants that would need to be converted. The solution to this is to fork a window like that and make it slightly less functional so that it doesn't have all those descendants. For example, if exporting an image to a different patient, we will need to use the Patient Select window. But that window has something like a hundred descendant windows because you sometimes use that window to add a new patient. But we don't need the functionality for adding a new patient just for exporting an image. So for the Patient Select window, we'll implement a WPF version that looks identical but is just for selection. That doesn't actually change the UI at all because there was already a flag in that situation to not allow adding a new patient. The point is that we are getting better and faster at this, like I knew we would.
As I was poking around at things to convert to WPF, I noticed that we do a lot of graphics drawing for various things. I think I can build a very quick class that lets us keep all that code and just reuse it. But instead of drawing to GDI+, it would draw by creating WPF objects on a canvas. Seems really simple. It's the same strategy I just used for vector drawing and it was really easy.
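A rough sketch of what such an adapter might look like. The idea is that existing drawing code keeps calling familiar Graphics-style methods, but each call now adds a retained-mode shape to a WPF Canvas instead of painting pixels. The class and method names here are made up for illustration:

```csharp
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

// Hypothetical adapter: Graphics-style calls that emit WPF shapes on a Canvas.
public class CanvasGraphics {
	private Canvas _canvas;

	public CanvasGraphics(Canvas canvas) {
		_canvas=canvas;
	}

	public void DrawLine(Color color,double x1,double y1,double x2,double y2) {
		Line line=new Line {
			X1=x1,Y1=y1,X2=x2,Y2=y2,
			Stroke=new SolidColorBrush(color),
			StrokeThickness=1,
		};
		_canvas.Children.Add(line);
	}

	public void FillRectangle(Color color,double x,double y,double width,double height) {
		Rectangle rect=new Rectangle {
			Width=width,Height=height,
			Fill=new SolidColorBrush(color),
		};
		Canvas.SetLeft(rect,x);
		Canvas.SetTop(rect,y);
		_canvas.Children.Add(rect);
	}
}
```

The tradeoff is that a Canvas retains every shape as an object, so redraws mean clearing and re-adding children rather than repainting a bitmap.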
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
I can't believe it's been a month since my last post. The thing I'm most excited about is that I came up with yet another trick to remove the need to convert windows to WPF if they are too hard right now. I'm using a global static event that FormOpenDental subscribes to. Any WPF window can call a method that raises this event, and FormOpenDental responds by launching the requested window. So now we can go backward and reach forms that dependency rules would normally put out of bounds. This is speeding up development tremendously. So we're just about to convert the actual Imaging module itself. It's big and scary: 2 files with about 9,000 lines of code. Once we start, it's an all-or-nothing process. It will take many weeks. I'm trying to figure out how to do it as incrementally as possible.
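The static-event mechanism boils down to just a few lines. This sketch uses invented names (FormLauncher, FormPatientEdit) purely to show the shape of it:

```csharp
using System;

// Hypothetical static bridge visible to both the WPF and WinForms layers.
public static class FormLauncher {
	public static event EventHandler<string> FormRequested;

	public static void RequestForm(string formName) {
		FormRequested?.Invoke(null,formName);
	}
}

// FormOpenDental subscribes once at startup and launches legacy forms on demand:
FormLauncher.FormRequested+=(sender,formName) => {
	switch(formName) {
		case "FormPatientEdit"://invented form name
			new FormPatientEdit().ShowDialog();
			break;
		//...other not-yet-converted forms
	}
};

// Any WPF window can now open a legacy form without a direct project reference:
FormLauncher.RequestForm("FormPatientEdit");
```

The dependency arrow only points one way: the WPF layer knows about the launcher, and only the main form knows about the legacy forms themselves.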
Also, they started pouring the 2nd and 3rd concrete floors for BldgH. The pour takes 8 days, and I think they've done maybe 2 or 3. After construction started, I decided we needed covers over the sidewalks. It rains a lot in Oregon, and we need to constantly walk between buildings. Those covers are much more involved than I imagined. There's an issue with lateral forces near a retaining wall. Ugh.
Jordan Sparks, DMD
http://www.opendental.com
Re: Jordan's blog
Conversion of the actual Imaging module window is underway in stages. Many controls have been converted, including the image selector at the left, the windowing slider, and the zoom slider. The toolbars have been overhauled again, with more icons. Nobody should be able to see any difference at all with all of this, which is a little sad. Today, I was able to add an entirely unrelated feature: snap layouts. That's when you hover over the maximize button of a window in Windows 11 and you get a popup window to let you choose how to snap the window to different locations. It's nice to finally have that out of the way. I don't like their options, though. My main complaint is that I want a "restore" button to get the window back to its original position, but that's sort of minor.
Apple Vision Pro went on sale yesterday. I was going to buy one to increase my productivity, but I had gradually decided that it would not do that. For me, the main use case would be watching 3D movies finally for the first time ever like they were meant to be watched. But I don't have time to watch movies, so no rush. Instead, I got an Xreal Air 2, which looks more like a pair of sunglasses and is so much cheaper. They have the advantage that they are open on the bottom so that you can see your keyboard directly, with the screen only in the top section of the glasses. That's the future. They are only HD resolution, compared to the Apple Vision Pro 4K, but it won't be long until 4K is available in the smaller form factor. Nobody is going to walk around with a big ski goggle on their face.
Humanoid robots are coming. I watch the progress constantly. Phenomenal progress has been made on the brains that will power them. About a year ago, the various companies tossed out all their complicated C++ code and started over. The new paradigm is called end-to-end learning. This is where the robots learn everything on their own through a variety of methods. They have had success teaching them with human teleoperation, showing them videos of humans doing the tasks, and full self training. The last option is really cool. Nvidia built a VR environment called IsaacSim which has very good realistic physics. They tell the virtual robot what the goal is, and it spends as much virtual time as necessary to progressively learn to get better through trial and error. They do the same thing for 10,000 different photorealistic environments. Then, they take an identical real robot and put it in the real world, which it just thinks is environment 10,001. The robots are really good at that point. They can literally juggle like a pro, which I've always said is how we'll know they're ready. The fundamental problems for robotics have all been solved. Within a year, we will see humanoid robots working in factories, and then they will spill out into the rest of the world. They will be able to work 24/7 in dirty unpleasant jobs. Because of the curiosity and self reinforcement that will be built in, they will continually get better and better. This is all going to happen before self driving cars and before widespread use of VR headsets. We are going to need a universal basic income earlier than we thought.
Jordan Sparks, DMD
http://www.opendental.com