The Case For Change & Accountability at the Holmdel BOE

I have been a resident of Holmdel since 2017. Ever since we came here, I have seen our schools underperform. When I reached out to the BOE and its current members, I did not receive a constructive response or an offer to work together on the issues I raised.

I am a little concerned that so far we have been playing a game of musical chairs. For example, Dr. McGarry, the previous superintendent, was the one driving the Holmdel 2020 initiative. We took his word for it and voted in favor of this investment. He left shortly afterward without explaining his departure to the community or providing a status update on how the project implementation was going. Now some of the current board members say that this was before they were elected, in order to avoid accountability.

The way I see it: the school board has a fiduciary duty to the town's residents. Even if a project predates a board member's tenure, I expect that member to determine, upon coming onboard, whether the project in question was on track and achieving its objectives. If it was, great! If it wasn't, then the board needs to course-correct and report back to the community on what was missing and how things would be brought back on track. Saying "it was before my time" should never be acceptable!

As you can tell from the following stats, our dollars do not go as far in improving the quality of education or our rankings as they do in other school districts. I have not included the vocational school districts in this comparison, so that we are fairly comparing our school to similar suburban high schools. We seem to spend more money per student, but are still ranked lower than many other public high schools.

Data from:  https://www.schooldigger.com/go/NJ/schoolrank.aspx?level=3


I will not belabor the point by listing every school ranked above us, but you get the idea.

The Holmdel 2020 Initiative: 

I understand that we had not invested in school infrastructure upgrades for a while, and it's justified to invest in giving our kids better facilities. But we should also be interested in tracking the ROI on the $40M we invested in this project. How much of these funds went into improving the metrics that drive our rankings? Having made this investment, the least the community should expect is for the board to share how these metrics have trended quarter over quarter, so we have confidence that the quality of education is improving and that our rankings will follow. Anyone who asks the town to invest in the schools makes the case that improved rankings will drive up home values and therefore result in better returns for taxpayers. We haven't seen any such improvements in Holmdel yet!

Holmdel Metrics from U.S. News & World Report High School Evaluation Criteria:

These are the metrics that drive our U.S. News rankings, and here's our current evaluation on each:

Source:

Current: 

https://www.usnews.com/education/best-high-schools/new-jersey/districts/holmdel-township-school-district/holmdel-high-school-12585#test_scores_section

2016:

https://web.archive.org/web/20161014054840/http://www.usnews.com/education/best-high-schools/new-jersey/districts/holmdel-township-school-district/holmdel-high-school-12585


As you can see, there has been a precipitous drop in math proficiency since 2016. Reading proficiency has also fallen over this period. The obvious question to ask is: have we lost a number of our good teachers? Each of us has anecdotally heard of teachers leaving or retiring, so it would certainly make sense for the board to share attrition statistics. Also, when filling these positions, do we strive to maintain the same level of seniority and expertise in the replacement teachers we hire? Are our kids getting as good an education as before?

Another argument used to explain away similar performance drops is that we lose a lot of students to the county's vocational school district. That does not explain this drop, since the vocational schools existed in 2016 as well and were pulling in roughly the same cohort from the high school.

We have had an interim superintendent for more than a year. I very much like Dr. Seitz, and I believe we are doing him a disservice by keeping this position temporary. I would like to know whether the board has confidence in him. If it does, please confirm this as a permanent position; if it does not, please begin a search for a permanent superintendent so that we can have accountability. Again, I am against playing musical chairs where no one takes responsibility.

One final item often mentioned by our current board members is that we were the platinum standard for school reopening. Before coming to that conclusion, I would like to see the studies the board reviewed (preferably double-blind) before making investments in “UV-C lighting, bi-polar ionization filtration units, antimicrobial coatings on commonly used surfaces, retro-commissioned the District HVAC systems”. The absence of a cluster of cases does not show that these measures are what prevented one (correlation does not imply causation). As you realize, a number of us chose to keep our kids home last year, and before the start of this school year vaccines became available to kids 12 years and older. How do we know that these factors did not do as much to prevent clusters of infections as our infrastructure investments? Attributing the outcome to a single factor is not supportable without appropriate studies.

I prefer using data to come to meaningful conclusions, because if you cannot measure it, you cannot improve it. And I haven’t seen meaningful metrics being tracked by the board or shared with us. 

That said, I believe it is time to bring change and new blood to the school board, so that it can be more receptive to our concerns about preparing our kids for a much more competitive future. My advice to the parents in Holmdel: please talk to each of the candidates to make up your mind on who can and will bring about this change. Specifically, ask each of them what changes they intend to bring about, what metrics they will track so that we as a community can determine whether we are on track, and how they will transparently share those metrics at each BOE meeting over their term.

We owe this to our children! 

Why We Chose To Move Our Analytics Platform To The Cloud

Our analytics platform uses only corporate data stores and serves only internal users and use cases. We chose to move it to the cloud. Our reasoning for doing this is as follows…

The usual perception among enterprises is that the cloud is risky: when you have assets outside the enterprise perimeter, they are prone to attack. But given the mobility trends of our user population, both the desire to access every workload from a mobile device that can be anywhere in the world and the BYOD trend, it is becoming increasingly clear that perimeter-protection networks may not be the best solution, given the large number of exceptions that have to be built into the firewalls to allow for these disparate use cases.

Here are some of the reasons why we chose the cloud anyway:

  • Cost: Running services in the cloud is less expensive. Shared resources are utilized far more efficiently when you can match dissimilar loads, and therefore cost less. A cloud provider hosts many clients whose loads are very different, so they can share compute and memory, reducing the total hardware needed to cover peak utilization. During my PBM/pharmacy days, I remember sizing our hardware with a 100% buffer to handle the peak load between Thanksgiving and Christmas. That was wasteful; what we really needed was capacity on demand. We did some capacity on demand with the IBM z-series, but that workload wasn't running and scaling on commodity hardware and was therefore more expensive than a typical cloud offering.
  • Flexibility: Easy transition to/from services instead of building and supporting legacy, monolithic apps.
  • Mobility: Easier to support connected mobile devices and transition to zero trust security.
  • Agility: New features/libraries are available faster than in-house, where libraries have to be reviewed, approved, compiled and then distributed before application developers can use them.
  • Variety of compute loads: GPUs/TPUs are available on demand. Last year, when we were running some deep learning models, our processing came to a halt after we introduced PCA, given the large feature set we needed to form principal components from. We needed GPUs to speed up the number crunching, but we only had GPUs in lab/PoC environments, not in production, so we ended up splitting our job into smaller chunks to complete the training process (see the sketch after this list). If we had been on the cloud, this would have been a very different story.
  • Security: It seems like an oxymoron, but as I will explain later, a cloud deployment is more secure than deploying an app/service within the enterprise perimeter.
  • Risk/Availability: Hybrid cloud workloads offer both risk mitigation and high availability/resilience.
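
Here is a minimal sketch of the chunked-PCA workaround mentioned in the "Variety of compute loads" bullet, using scikit-learn's IncrementalPCA to process the data in batches when a single pass won't fit the hardware. The array sizes, batch size and component count are illustrative assumptions, not our actual workload.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50_000, 1_000))      # stand-in for the large feature set

# Fit PCA one chunk at a time instead of holding the whole job in memory at once.
ipca = IncrementalPCA(n_components=50, batch_size=5_000)
for start in range(0, X.shape[0], 5_000):
    ipca.partial_fit(X[start:start + 5_000])

X_reduced = ipca.transform(X)             # project onto the principal components
print(X_reduced.shape)                    # (50000, 50)
```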

Let’s consider the security aspect:

Maintaining and patching a perimeter-based enterprise network is expensive and risky. One wrong firewall rule or access exception can jeopardize the entire enterprise. Consider the following:

A typical APT (Advanced Persistent Threat) has the following attack phases:

  • Reconnaissance & Weaponization
  • Delivery
  • Initial Intrusion
  • Command and Control
  • Lateral Movement
  • Data Exfiltration

An enterprise usually allows all incoming external email, and most organizations do not implement email authentication controls such as DMARC (Domain-based Message Authentication, Reporting & Conformance). You would also have IT systems that accept incoming file transfers, so some kind of FTP gateway would be provided. You also see third-party vendors/suppliers coming in through a partner access gateway. Then you have to account for users within the company who are remote and need to access firm resources to do their jobs, which means a remote access gateway. You also want to allow employees and consultants inside the enterprise perimeter to access most websites and content outside it, implying you have to provision a web gateway. And there are the other unknown gateways opened up in the past that no one documented; people are now hesitant to remove those rules from the firewall since they do not know who is using them. Given all these holes built into the enterprise perimeter, imagine how easy it is to allow web egress of confidential data through even a simple misconfiguration of any of these gateways. The people managing the enterprise perimeter have to be right every time, while the threat actors just need one slip-up.
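
To make the DMARC point concrete, here is a hedged sketch (assuming the dnspython library) of how a gateway could check whether a sending domain even publishes a DMARC policy before trusting its mail; the domain below is a placeholder, not our actual tooling.

```python
import dns.resolver

def dmarc_policy(domain: str):
    """Return the domain's published DMARC record, or None if it has none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

print(dmarc_policy("example.com"))  # e.g. "v=DMARC1; p=reject; ..." or None
```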

Putting the pieces together, the following sequence could result in a perimeter breach:

  1. Email accepted from anyone without regard to controls such as DMARC.
  2. Someone internal to the perimeter clicks on the phish and downloads malware on their own machine.
  3. Easy lateral traversal across the enterprise LAN, since most nodes within the perimeter trust each other.
  4. Web egress of sensitive data allowed through a web gateway.

On the other hand, consider a micro-segmented cloud workload, which provides multi-layer protection through DID (defense in depth). Since the workload is very specific, the rules on the physical firewall, as well as in the virtual firewall or IPS (Intrusion Prevention System), are very simple and only need to account for the access this workload is allowed. Other changes in the enterprise do not result in these rules being changed. So distributing workloads through micro-segmentation and isolation actually reduces the risk of a misconfiguration or a data breach.
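
As an illustration of how small a micro-segmented workload's rule set can stay, here is a hedged sketch using boto3 against AWS (one possible provider); the group name, VPC ID, CIDR and port are hypothetical placeholders, not our actual environment.

```python
import boto3

ec2 = boto3.client("ec2")

# One security group scoped to a single workload; nothing else is referenced.
sg = ec2.create_security_group(
    GroupName="analytics-api-sg",             # hypothetical workload name
    Description="Only what the analytics API needs",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC
)

# The entire allow-list: HTTPS from the internal consumer subnet, nothing more.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.20.0.0/24",
                      "Description": "internal consumers only"}],
    }],
)
# Everything else is denied by default; enterprise-wide changes never touch this rule.
```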

Another key aspect to consider is the distribution of nodes. Botnets have an interesting C&C (Command and Control) design: if the C&C node is knocked out, another node can assume the C&C function and the botnet continues its attack. Cloud workloads can implement the same structure, so if one of the coordinating nodes falls victim to a DoS (Denial of Service) attack, another can assume its role. Unlike an enterprise perimeter, a cloud deployment does not have a single attack surface with an associated single point of failure; a successful DoS attack does not mean that all assets and capabilities are neutralized, because other nodes take over and continue to function, providing high availability.
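
A minimal sketch of that takeover behavior from a client's point of view, assuming the Python requests library; the coordinator URLs are hypothetical, and a real deployment would more likely sit behind a load balancer or service discovery.

```python
import requests

COORDINATORS = [
    "https://coord-1.internal.example/health",   # hypothetical endpoints
    "https://coord-2.internal.example/health",
    "https://coord-3.internal.example/health",
]

def active_coordinator() -> str:
    """Return the first coordinator that answers its health check."""
    for url in COORDINATORS:
        try:
            if requests.get(url, timeout=2).status_code == 200:
                return url
        except requests.RequestException:
            continue          # node down or under attack; try the next one
    raise RuntimeError("no coordinator reachable")
```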

It’s time to embrace a micro-segmented, cloud-native architecture where a tested and certified set of risk controls is applied to every cloud workload. The uniformity of these controls also allows risk to be managed much more easily than with the perimeter protection schemes most enterprises put around themselves.

Another important trend that enterprises should look to implement is zero trust security. The following diagram represents the responsibilities of each layer/component when implementing zero trust security. A very good writeup from Ed Amoroso on how zero trust security works is here.

My Research Project: Microbial Fuel Cells

For my sophomore research class, I did a project testing the effect of anode surface area on microbial fuel cells, or MFCs. MFCs are an experimental type of fuel cell that uses bacterial respiration to generate electricity. In an MFC, bacteria and sugar are placed in a chamber without oxygen, with an electrode on either side. The bacteria form a biofilm on the anode and transfer electrons from anaerobic respiration to it. The cathode, which is made of platinum and exposed to air, uses the electrons flowing through the circuit to catalyze the reverse half-reaction in the cell. I 3D printed two MFCs for this project. One of my MFCs had an anode with a larger surface area, which I hypothesized would increase biofilm formation and thus power output. As my bacteria source, I used pond sludge mixed with a glucose solution. I ran both fuel cells for two days, taking power readings every minute.
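
For readers curious how the power readings are derived, here is a small sketch of the arithmetic, assuming the cell voltage is measured across a known external load resistor so that P = V²/R; the resistor value and sample readings below are made up for illustration.

```python
R_LOAD = 1000.0                          # ohms, hypothetical external load resistor
voltages_mv = [212.0, 215.5, 219.3]      # sample per-minute readings, in millivolts

for v_mv in voltages_mv:
    v = v_mv / 1000.0                    # convert mV to V
    power_uw = v ** 2 / R_LOAD * 1e6     # P = V^2 / R, reported in microwatts
    print(f"{v_mv:.1f} mV -> {power_uw:.1f} uW")
```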

This was one of the most rewarding and exhilarating experiences of my life. I learned a lot about the engineering design process while designing the fuel cells, and a lot about electrochemistry and circuit design while learning about MFCs. While my results did not show that the higher-surface-area anode increases power output, I would like to continue this project and modify it to see if I can increase power output. One idea I have is to coat the anode with gold nanoparticles, which should increase surface area at the nanoscale and could improve biofilm formation. I really enjoyed this project and am planning to do much more like it in the future.

YouTube video describing the project

Daily Notes

5/3/19

This is pretty disturbing – the Barium group is able to hack into supply chains with ease and infect a very large number of users.

4/11/19

While I understand why we need humans to weed through pictures and tag objects for training image recognition algorithms, this latest from Amazon may be going too far for what I assume are NLP algorithms. Folks need to be concerned about privacy and ownership of data before going all in on more intelligent algorithms. No wonder I will never agree to get a voice assistant at home (even if offered for free) unless it's built by me and all its data is under my control.

4/5/19

Good read on EverCrypt, which mathematically assures that its cryptographic tools cannot be broken through coding errors, buffer overflows, or certain side-channel attacks.

2/26/19

Another report of Supermicro hardware weaknesses leading to compromised bare-metal servers between leases for cloud customers.

2/25/2019

Encouraging story on an Accenture initiative to use blockchain to maintain complete supply chain transparency, all the way to the end consumer, on how sustainably and ethically a product is produced, allowing us to reward behaviors we support and punish those we want to discourage with our wallets.

2/23/2019

Absolutely heartbreaking CNN story of what we do to people in the name of capitalism. What I do not understand is: how does approval for Firdapse allow Catalyst Pharmaceuticals to stop the production of 3,4-DAP by Jacobus Pharmaceutical for existing patients? Catalyst is pushing the existing users of the older drug onto charities, endowments and other mechanisms to bear its new list price of $375,000/year – how can our legal frameworks even allow that and send someone into a slow, inevitable decline and death? What if, for some macroeconomic or socio-political reason, there weren't enough dollars in charities and foundations to support the LEMS community? Why does a capitalistic company get to use charity to support its business practices? Have we lost our humanity?

2/13/2019

Martin Wolf on cryptocurrencies in the FT

Sounds promising: a new study from MIT suggests that we could, after all, afford to extract CO2 from the atmosphere in a cost-effective way.

2/12/2019

A colleague pointed out a couple of very interesting articles regarding Trust, well worth a read:

Reflections on Trusting Trust: Ken Thompson from August 1984

Schneier on Blockchain and Trust

2/4/2019

New threat to personal information from the Iranian cyber espionage group APT39 from the FireEye security threat blog.

2/2/2019

Talk about managing key-person risk – crypto exchange QuadrigaCX is unable to pay back $190MM in client holdings because its founder, Gerald Cotten, who had the only keys to its cold storage, died unexpectedly while visiting India. For nascent industries like bitcoin exchanges, something like this massively erodes trust, which will be very hard to rebuild for the other players in the space.

1/29/2019

Very interesting article from Kashmir Hill at Gizmodo, who tried to cut the big 5 tech giants (Amazon, Facebook, Google, Microsoft and Apple) out of her and her family's lives, with interesting results. It goes to show the monopoly these tech giants enjoy on services critical to our daily lives – whether using maps to navigate, consuming news or listening to streaming content.

12/1/2018

Edward Snowden explains Blockchain

11/28/2018

Came across a very interesting blog post (The Internet Needs More Friction) today – reminding us that a lack of speed, or added friction, can be a winning strategy in itself. Remember Brad Katsuyama's Thor tool at RBC, described in Flash Boys by Michael Lewis, which prevented HFT systems from scalping large orders.

11/26/2018

Three really big pieces of news today – 

  1. Announcement of two babies born with gene editing techniques using CRISPR Cas9 – the genie is out of the bottle and the question no longer is should we do this …
  2. GM, looking to the future, announced it is cutting 14,000 management and factory jobs, closing 7 plants and rechanneling investments. CEO Mary Barra said the company has invested in newer architecture for those vehicles, and that it's planning to hire people with expertise in software and electric and autonomous vehicles. This seems eerily like the beginning of some of the predictions in my blog post here, with large-scale job losses from automation and autonomous vehicles.
  3. InSight lands on Mars

11/23/2018

US climate change impact report paints a dire picture

How can you relate ML algorithms to your business?

When you start talking about Machine Learning algorithms, most people’s eyes glaze over – it’s higher order math, something cool and distant that they don’t want to be bothered with.

So how can you engage in a meaningful conversation about these algorithms and demonstrate how and why they add value to make the case for implementing them? 

We have had success showing measurable, quantifiable results where predictions from these algorithms beat the current-state process. But not everything is as easy to measure or demonstrate, so it helps to classify your ML models into categories and build a measurement framework around each category.

The following elements are critical in establishing such a measurement system: 

  • A continuously executable platform that can evaluate rule sets and persist results.
  • Rule set authoring, configuration, promotion and execution to be configurable and on demand.
  • Measurable results that are persisted and can be audited.
  • Continuous Improvement: A champion-challenger setup, where the production population uses the “champion” rule set while a pilot population uses the “challenger”. When the challenger does statistically better than the champion, we have the ability to flip the two (see the sketch after this list).
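
Here is a minimal sketch of the champion-challenger "flip" decision referenced above, assuming each rule set's outcome can be summarized as a success count over a number of trials (for example, accepted recommendations); the statsmodels test and the counts are illustrative assumptions, not our production tooling.

```python
from statsmodels.stats.proportion import proportions_ztest

champion_successes, champion_trials = 480, 4000
challenger_successes, challenger_trials = 565, 4000

# One-sided two-proportion z-test: is the challenger's success rate higher?
stat, p_value = proportions_ztest(
    count=[challenger_successes, champion_successes],
    nobs=[challenger_trials, champion_trials],
    alternative="larger",
)

if p_value < 0.05:
    print("Challenger is statistically better - promote it to champion")
else:
    print("Keep the current champion")
```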

I have found the following classification useful for the work my team is doing. It isn't a comprehensive framework covering all available ML models, just something that has worked for us:

  • Clustering: The technique of dividing a set of input data into possibly overlapping subsets where the elements of each subset are considered related by some similarity measure. The measures we have used here are commonality by department, division, cost center, etc. Typical implementations have used DBSCAN (density-based spatial clustering of applications with noise), k-spanning tree, kernel k-means and shared nearest neighbor clustering algorithms. Typical use cases we have put to clustering are role classification, entitlement clustering, etc.
  • ARM (Association Rule Mining): A technique to uncover how items are associated with one another. We typically calculate three measures – support (support(A) = occurrences(A)/total), confidence (confidence(A→B) = support(A,B)/support(A)) and lift (lift(A→B) = support(A,B)/(support(A) × support(B))). We have used it to predict what rights should be offered to a new employee who joins a group or department (see the sketch after this list).
  • Recommender Systems: We have seen success with collaborative filtering, both item-based and user-based. UBCF assumes that a user will rate an item similarly to their neighbors, if they are similar; IBCF focuses on which items are most similar to those a user already enjoys, allowing us to directly recalculate the similarity between co-rated items and skip the k-neighborhood search. A key measure we have used to evaluate algorithms in this space is how often a user acted on a suggestion surfaced by the recommender. We have used this to recommend other items that a user could request along with their original request.
  • Market Basket Analysis: A technique to uncover associations between items by looking at combinations of items that frequently occur together in transactions, in order to predict the occurrence of one event given the occurrence of another.
  • Neural Networks: Models which are trained to mimic the behavior of a system. The weights in the model are tuned using the training set until we start getting realistic predictions for new data the model has not seen. The measures here are to keep a record of actual vs. predicted events to track variance, and to use continuous feedback to improve predictions. We have plans to use this to learn from system events on the network and infer which processes to run in response.
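
To ground the ARM measures defined above, here is a tiny sketch computing support, confidence and lift over a made-up entitlement "transaction" log; the item names and numbers are purely illustrative.

```python
transactions = [
    {"email", "vpn", "wiki"},
    {"email", "vpn"},
    {"email", "wiki"},
    {"email", "vpn", "git"},
    {"vpn", "git"},
]
n = len(transactions)

def support(*items):
    """Fraction of transactions containing all of the given items."""
    return sum(1 for t in transactions if set(items) <= t) / n

def confidence(a, b):
    """confidence(a -> b) = support(a, b) / support(a)."""
    return support(a, b) / support(a)

def lift(a, b):
    """lift(a -> b) = support(a, b) / (support(a) * support(b)); > 1 means positive association."""
    return support(a, b) / (support(a) * support(b))

print(round(support("email", "vpn"), 2))     # 0.6
print(round(confidence("email", "vpn"), 2))  # 0.75
print(round(lift("email", "vpn"), 2))        # 0.94
```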

Will keep you posted on the results. Happy to hear about alternate frameworks and paradigms that folks have used to gain acceptance.