We hastily built a project on AR (ActiveRecord): all queries, writes, and updates go through it. For starting a new project I think that's perfectly reasonable, since the project might not take off at all.
But the project lives on, and the AR models swell with logic. The database grows, and with it the volume of data being processed. So you start wondering how to move the logic out of AR (behaviors, afterSave, beforeSave) in the first place, so that it can also be used with batchInsert and batchUpdate.
A small digression about what kind of logic I mean, with examples:
- An order status changed: send notifications by e-mail, Telegram, etc.; record the change of the order amount and status in a history table.
- Update/invalidate the cache, or update the index in Elasticsearch.
- Send the data for moderation.
- And so on.
My idea is that all of this should be moved into handlers and events: some class processes the changed data (comparing old data with new) and generates events, and the actions triggered by those events are processed separately, for example in a queue, so that the current batchInsert or batchUpdate of several thousand records is not slowed down.
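To make the idea concrete, here is a minimal sketch of what I have in mind. All the names (`detectChanges`, the event array shapes) are hypothetical, not an existing API: a plain function diffs old rows against new rows and produces events, independently of ActiveRecord, so it would work equally well after a batchInsert/batchUpdate.

```php
<?php
// Hypothetical sketch: compare old and new row data (keyed by primary key)
// and produce a list of change events. Queue workers would then consume
// these events to send notifications, update caches, reindex, etc.
function detectChanges(array $oldRows, array $newRows, array $watchedFields): array
{
    $events = [];
    foreach ($newRows as $id => $new) {
        $old = $oldRows[$id] ?? null;
        if ($old === null) {
            // Row did not exist before: a "created" event.
            $events[] = ['type' => 'created', 'id' => $id, 'data' => $new];
            continue;
        }
        foreach ($watchedFields as $field) {
            if (($old[$field] ?? null) !== ($new[$field] ?? null)) {
                // Watched field changed: a "changed" event with old/new values.
                $events[] = [
                    'type'  => 'changed',
                    'id'    => $id,
                    'field' => $field,
                    'from'  => $old[$field] ?? null,
                    'to'    => $new[$field] ?? null,
                ];
            }
        }
    }
    return $events;
}

// Usage: after a batch write, feed the detector and push the events
// to a queue instead of doing the side effects inline.
$old = [1 => ['status' => 'new']];
$new = [1 => ['status' => 'paid'], 2 => ['status' => 'new']];
$events = detectChanges($old, $new, ['status']);
// One 'changed' event for order 1, one 'created' event for order 2.
```

Roughly this, only done properly: where should such a detector live, who loads the old rows, how do the events get into the queue?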
But how should this look architecturally, and what are the nuances and pitfalls?
I'm probably not the first to ask this question, and there must already be established architectural solutions.
I'd like to see some examples, because on paper it's all simple: create a service, add a repository, wire up observers there. But in practice I just don't have the experience.