From bf9e76793aab5870da98bf49b5fb6c6eed82625f Mon Sep 17 00:00:00 2001
From: Manish
Date: Thu, 14 May 2015 17:23:05 +0530
Subject: [PATCH] Handle error from XMPP session send.

Problem: If an XMPP channel send fails in a way that internally (at the TCP layer) translates into a deferred send operation, the agent controller module was treating it as a failure. For the route update seen in the bug, any such deferral set the route's exported flag to false. Because the flag was false, the unsubscribe for this route was not sent on a later route delete. The deferred update, however, would still have gone out after some time, so the control node never received the delete, resulting in the issue mentioned in the bug.

Solution: Ideally, every failed send should be used to enqueue further requests to the control node and replay them whenever the write-ready callback (writereadycb) is called; that way the agent would not overload the socket. As a quick fix, however, the error is no longer used to decide further operations after a send is done, on the assumption that the send will always be successful. The following messages are currently sent: 1) VM config sub/unsub 2) Config subscribe for agent 3) VRF sub/unsub 4) Route sub/unsub. An absent connection is taken care of by channel-flap handling.

Change-Id: Ib6e0856b5c689b51209add4ab459b8bd2e952143
Closes-bug: 1453483
(cherry picked from commit 0a70915fd3bc1954154e657f59123d1a4597f2a4)
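To make the quick fix concrete, here is a minimal, self-contained sketch of the send semantics it assumes; Channel and SendSubscribe are illustrative toy names, not the agent's real classes, and only the deferred-send behavior is taken from this patch:

    #include <cstddef>
    #include <cstdio>

    // Toy model of the semantics this fix assumes: a false return from
    // Send() means the TCP send was deferred and will be flushed from the
    // write-ready callback (writereadycb), not that the data was lost.
    class Channel {
    public:
        bool Send(const char *buf, std::size_t len) {
            (void)buf;
            (void)len;
            return false;  // pretend the socket buffer is full: deferred
        }
    };

    bool SendSubscribe(Channel *ch, const char *msg, std::size_t len) {
        if (!ch->Send(msg, len)) {
            // Deferred, not failed: trace it and carry on. Callers must not
            // clear their exported/subscribed flags here, otherwise the
            // matching unsubscribe is never sent (the bug being fixed).
            std::printf("send deferred, will go out on write-ready\n");
        }
        return true;  // treat every send as successful
    }

    int main() {
        Channel ch;
        const char msg[] = "subscribe vrf-1";
        return SendSubscribe(&ch, msg, sizeof(msg) - 1) ? 0 : 1;
    }

The controller_peer.cc hunks below apply exactly this pattern to each sender, logging the deferral through CONTROLLER_TRACE and returning true unconditionally.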
VRF state not deleted in DelPeer walk.

Problem: The problem statement remains the same as in this commit: https://github.com/Juniper/contrail-controller/commit/8e302fcb991c8f5d8f5defb85b9851f8cde5f479 However, that commit does not solve the issue. The walk count was incremented on enqueue of a walk, but when the walk is processed it calls Cancel for any previously started walk, and that Cancel decrements the walk count, defeating the purpose of moving the walk-count increment to enqueue time. The commit also missed that the walk count must be incremented once per route table: consider a walk for a VRF with four route tables, which should give a walk count of 5 (1 for the VRF table + 4 for the route tables); with that fix it was 2 (1 for the VRF + 1 for the route tables).

Solution: Use a separate enqueued-walk count and restore walk_count to its behavior before the commit above; use both counts to check for walk done.

Closes-bug: 1455862
Change-Id: I8d96732375f649e70d6754cb6c5b34c24867ce0c
(cherry picked from commit 705165854bbdaff9c47a2a9443410eec53e4fb37)

Multicast route gets deleted when vxlan id is changed in configured mode.

Problem: In oper multicast, when the local peer's vxlan-id changed, an add was issued for the route with the new vxlan id and a delete for the same route with the old one. Since the peer is local, the path search compares only the peer and not the vxlan id; this deleted the local path and eventually the multicast route.

Solution: Do not withdraw the path from the local peer on a vxlan id change; just trigger an update of it. This results in the controller route export call which, using the state set on the flood route, can identify that the vxlan id changed and take care of withdrawing the old vxlan id and updating with the new one.

Change-Id: I3afeddd2620615bb477aec5a0c6715fcdc99352b
Closes-bug: 1457007
(cherry picked from commit a77f2c31a2c6fb8a8abe22f3a562767cf230cbef)

Mac route in agent was having invalid label and destination NH.

Problem: Two issues were seen. 1) The label was wrongly interpreted as MPLS though it was VXLAN, and never got corrected. The sequence of events: the EVPN route was received before the global vrouter config. Encapsulation priorities were absent in the agent, so the EVPN route was programmed with MPLS-GRE (the control node had sent VXLAN). Since MPLS encap was chosen, the label was interpreted as an MPLS label and not a vxlan id. When the global vrouter config was received, a resync was done for the route. The resync also failed to rectify the encap to VXLAN (although VXLAN was by then available in the priorities) because the decision to rectify is based on the vxlan id (i.e. if the vxlan id is 0, default to MPLS, as 0 is invalid). Here the vxlan id was 0, as explained above, and hence the encap continued to be MPLS. 2) The nexthop was different between the EVPN route and the derived MAC route. This happened because path sync of the EVPN route returned false even though an NH change was seen, which skipped the MAC route rebake; the true value set by ChangeNH was overridden by MplsChange.

Solution: For case 1), if the encap in the message sent by the control node is VXLAN only, store the label as the vxlan id and mark the MPLS label invalid, even though the tunnel type is computed as MPLS (encapsulation priorities not yet received). If only MPLS encap is sent, use the label as MPLS and reset the vxlan id to invalid. If both VXLAN and MPLS are sent, fall back to the old approach of interpreting the label based on the computed tunnel type. For case 2), fix the return value.

Change-Id: Ibeeb3de16d618ecb931c35d8937591d9c9f7f15e
Closes-bug: 1457355
(cherry picked from commit f8f09e3f94590333a3f1a38acff47aeb834c1f18)

Multicast route not deleted on vrf delete.

In the ToR agent, the multicast route does not get deleted when the VRF is deleted, because deletion is triggered only from logical-switch delete. Though that is valid, a VRF delete should also result in route delete. As mentioned in the bug, in a scaled setup the VRF delete may be received while the VN delete is delayed, leaving the multicast route dangling until the VN delete arrives. Ultimately, if the VRF delete timeout gets executed, the agent will crash.

Change-Id: I8db095a6e99dddeb17bea2edbcbe10fab4c58623
Closes-bug: 1458187
(cherry picked from commit e6c1005a12b70c8843f5dca929d169cef5f2e1e0)

AgentXmppChannel is invalid in route notification.

Problem: Every AgentXmppChannel has a peer created when the channel comes up. When the channel goes down, this peer is deleted and a walk is started to delete the states and paths for this peer in route and VRF entries. After the channel has gone into the not-ready state, it may get timed out and ultimately deleted while the walk is still underway and the deleted peer of this channel is not yet unregistered from the VRF and route tables. Either this walk notification or any update on a DB entry in these tables will then pass the deleted channel as an argument, resulting in a crash.

Solution: Every notification already checks whether it is for an active peer or a deleted one (IsBgpPeerActive). In the same routine, also check whether the channel passed in is one of the channels currently used by the agent; if it is not, consider the peer inactive.

Change-Id: I8cdb01d9e5a6c83e6f9f7b6785288d9ea5c973d2
Closes-bug: 1458529
(cherry picked from commit 9056ce9349ba7f46fb0d7a8f88af68a9e537944b)
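As a rough sketch of the added validity check (toy types only; the real IsBgpPeerActive, shown in the controller_peer.cc hunk below, additionally requires a commissioned BGP peer and a channel in xmps::READY state):

    #include <cstdint>

    // Illustrative constant; the agent defines its own MAX_XMPP_SERVERS.
    const std::uint8_t kMaxXmppServers = 2;

    struct Channel {
        bool ready;  // stands in for GetPeerState() == xmps::READY
    };

    struct Agent {
        Channel *channels[kMaxXmppServers];  // currently registered channels
    };

    // A peer is treated as active only if its channel pointer is still one
    // of the agent's registered channels; a stale pointer delivered by an
    // in-flight walk is reported inactive instead of being dereferenced.
    bool IsPeerActive(const Agent *agent, const Channel *peer) {
        bool found = false;
        for (std::uint8_t idx = 0; idx < kMaxXmppServers; idx++) {
            if (agent->channels[idx] == peer) {
                found = true;
                break;
            }
        }
        if (!found)
            return false;  // channel deleted while the walk was in flight
        return peer != 0 && peer->ready;  // then the usual readiness checks
    }

    int main() {
        Channel c = {true};
        Agent a = {{&c, 0}};
        return IsPeerActive(&a, &c) ? 0 : 1;
    }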
On deletion of AgentXmppChannel, BGP peer was not cleared properly.

Ideally, every channel state change is responsible for cleaning up or creating the BGP peer. However, with commit https://github.com/Juniper/contrail-controller/commit/6d845c15ca2bd114ded81d4092aa134b929ec39e that assumption no longer holds. To fix it, artificially inject a channel down event on deletion of the agent xmpp channel. Since the BGP peer is being manipulated, also push the processing of applying discovery servers onto the controller work queue.

Change-Id: Ia733ed1061747153fbfb841f63813ad148ce6bfc
Closes-bug: 1460435
(cherry picked from commit 45df6471a7948f1bc4dadab830cc6d57b42e3859)
---
src/vnsw/agent/cmn/agent.h | 2 +-
.../agent/controller/controller_export.cc | 9 +-
src/vnsw/agent/controller/controller_export.h | 4 +-
src/vnsw/agent/controller/controller_ifmap.cc | 6 +-
src/vnsw/agent/controller/controller_init.cc | 38 +++++++-
src/vnsw/agent/controller/controller_init.h | 16 +++-
src/vnsw/agent/controller/controller_peer.cc | 53 ++++++++---
src/vnsw/agent/controller/controller_peer.h | 2 +-
.../agent/controller/controller_route_path.cc | 34 ++++---
.../controller/controller_route_walker.cc | 7 +-
.../agent/controller/controller_vrf_export.cc | 6 +-
.../agent/controller/controller_vrf_export.h | 2 +-
src/vnsw/agent/oper/agent_path.cc | 4 +-
src/vnsw/agent/oper/agent_route_walker.cc | 42 ++++++---
src/vnsw/agent/oper/agent_route_walker.h | 16 +++-
src/vnsw/agent/oper/bridge_route.cc | 30 +++----
src/vnsw/agent/oper/multicast.cc | 10 +--
src/vnsw/agent/oper/peer.cc | 2 +-
.../ovsdb_client/multicast_mac_local_ovsdb.cc | 59 +++++++++++--
.../ovsdb_client/multicast_mac_local_ovsdb.h | 13 +++
.../test/test_ovs_multicast_local.cc | 88 +++++++++++++++++++
.../ovs_tor_agent/ovsdb_client/vn_ovsdb.cc | 6 ++
.../agent/test/test_agent_route_walker.cc | 2 +
src/vnsw/agent/test/test_l2route.cc | 35 ++++++++
src/vnsw/agent/test/test_route.cc | 37 ++++++
src/vnsw/agent/test/test_util.cc | 19 ++--
26 files changed, 449 insertions(+), 93 deletions(-)
diff --git a/src/vnsw/agent/cmn/agent.h b/src/vnsw/agent/cmn/agent.h index 7b52fbe398b..5b21ec4148f 100644 --- a/src/vnsw/agent/cmn/agent.h +++ b/src/vnsw/agent/cmn/agent.h @@ -460,7 +460,7 @@ class Agent { xs_stime_[idx] = time; } - AgentXmppChannel *controller_xmpp_channel(uint8_t idx) { + AgentXmppChannel *controller_xmpp_channel(uint8_t idx) const { return agent_xmpp_channel_[idx]; } diff --git a/src/vnsw/agent/controller/controller_export.cc b/src/vnsw/agent/controller/controller_export.cc index ed70ff95235..096ec78c518 100644 --- a/src/vnsw/agent/controller/controller_export.cc +++ b/src/vnsw/agent/controller/controller_export.cc @@ -75,7 +75,8 @@ void RouteExport::ManagedDelete() { } // Route entry add/change/del notification handler -void RouteExport::Notify(AgentXmppChannel *bgp_xmpp_peer, +void RouteExport::Notify(const Agent *agent, + AgentXmppChannel *bgp_xmpp_peer, bool associate, Agent::RouteTableType type, DBTablePartBase *partition, DBEntryBase *e) { AgentRoute *route = static_cast(e); @@ -84,7 +85,7 @@ void RouteExport::Notify(AgentXmppChannel *bgp_xmpp_peer, if (!route->IsDeleted()) { // If there is no active BGP peer attached to channel, ignore // non-delete notification for this channel - if (!AgentXmppChannel::IsBgpPeerActive(bgp_xmpp_peer)) + if (!AgentXmppChannel::IsBgpPeerActive(agent, bgp_xmpp_peer)) return; // Extract the listener ID of active BGP peer for route table to which @@ -398,8 +399,8 @@ RouteExport* RouteExport::Init(AgentRouteTable *table, RouteExport *rt_export = new RouteExport(table); bool associate = true; rt_export->id_ = table->Register(boost::bind(&RouteExport::Notify, - rt_export, bgp_xmpp_peer, associate, - table->GetTableType(), _1, _2)); + rt_export, table->agent(), bgp_xmpp_peer, + associate, table->GetTableType(), _1, _2)); return rt_export; } diff --git a/src/vnsw/agent/controller/controller_export.h
b/src/vnsw/agent/controller/controller_export.h index 46b8fece393..a3ac0b8efab 100644 --- a/src/vnsw/agent/controller/controller_export.h +++ b/src/vnsw/agent/controller/controller_export.h @@ -34,8 +34,8 @@ class RouteExport { RouteExport(AgentRouteTable *rt); ~RouteExport(); - void Notify(AgentXmppChannel *bgp_xmpp_peer, bool associate, - Agent::RouteTableType type, + void Notify(const Agent *agent, AgentXmppChannel *bgp_xmpp_peer, + bool associate, Agent::RouteTableType type, DBTablePartBase *partition, DBEntryBase *e); void ManagedDelete(); DBTableBase::ListenerId GetListenerId() const {return id_;}; diff --git a/src/vnsw/agent/controller/controller_ifmap.cc b/src/vnsw/agent/controller/controller_ifmap.cc index b1fab6f848d..08e33cf3076 100644 --- a/src/vnsw/agent/controller/controller_ifmap.cc +++ b/src/vnsw/agent/controller/controller_ifmap.cc @@ -132,7 +132,7 @@ void AgentIfMapVmExport::Notify(DBTablePartBase *partition, DBEntryBase *e) { peer = agent_->controller_xmpp_channel(agent_->ifmap_active_xmpp_server_index()); ifmap = agent_->ifmap_xmpp_channel(agent_-> ifmap_active_xmpp_server_index()); - if (AgentXmppChannel::IsBgpPeerActive(peer) && ifmap) { + if (AgentXmppChannel::IsBgpPeerActive(agent_, peer) && ifmap) { if ((info->seq_number_ == ifmap->GetSeqNumber()) && (info->vmi_list_.size() == 1)) { CONTROLLER_TRACE(IFMapVmExportTrace, vmid.str(), "", @@ -178,7 +178,7 @@ void AgentIfMapVmExport::Notify(DBTablePartBase *partition, DBEntryBase *e) { //Ensure that peer exists and is in active state peer = agent_->controller_xmpp_channel(agent_->ifmap_active_xmpp_server_index()); - if (!AgentXmppChannel::IsBgpPeerActive(peer)) { + if (!AgentXmppChannel::IsBgpPeerActive(agent_, peer)) { return; } @@ -209,7 +209,7 @@ void AgentIfMapVmExport::NotifyAll(AgentXmppChannel *peer) { struct VmExportInfo *info = NULL; Agent *agent = peer->agent(); - if (!AgentXmppChannel::IsBgpPeerActive(peer)) { + if (!AgentXmppChannel::IsBgpPeerActive(agent, peer)) { return; } diff --git a/src/vnsw/agent/controller/controller_init.cc b/src/vnsw/agent/controller/controller_init.cc index 371324ebabf..6c402dbf4c3 100644 --- a/src/vnsw/agent/controller/controller_init.cc +++ b/src/vnsw/agent/controller/controller_init.cc @@ -30,6 +30,10 @@ using namespace boost::asio; SandeshTraceBufferPtr ControllerTraceBuf(SandeshTraceBufferCreate( "Controller", 1000)); +ControllerDiscoveryData::ControllerDiscoveryData(std::vector resp) : + ControllerWorkQueueData(), discovery_response_(resp) { +} + VNController::VNController(Agent *agent) : agent_(agent), multicast_sequence_number_(0), unicast_cleanup_timer_(agent), multicast_cleanup_timer_(agent), @@ -230,6 +234,22 @@ void VNController::DnsXmppServerDisConnect() { } +//During delete of xmpp channel, check if BGP peer is deleted. +//If not agent never got a channel down state and is being removed +//as it is not part of discovery list. +//Artificially inject NOT_READY in agent xmpp channel. 
+void VNController::DeleteAgentXmppChannel(AgentXmppChannel *channel) { + if (!channel) + return; + + BgpPeer *bgp_peer = channel->bgp_peer_id(); + if (bgp_peer != NULL) { + AgentXmppChannel::HandleAgentXmppClientChannelEvent(channel, + xmps::NOT_READY); + } + delete channel; +} + //Trigger shutdown and cleanup of routes for the client void VNController::DisConnect() { XmppServerDisConnect(); @@ -303,11 +323,11 @@ void VNController::DisConnectControllerIfmapServer(uint8_t idx) { //cleanup AgentXmppChannel agent_->ResetAgentMcastLabelRange(idx); - delete agent_->controller_xmpp_channel(idx); + DeleteAgentXmppChannel(agent_->controller_xmpp_channel(idx)); agent_->set_controller_xmpp_channel(NULL, idx); //cleanup AgentIfmapXmppChannel - delete agent_->ifmap_xmpp_channel(idx); + delete agent_->ifmap_xmpp_channel(idx); agent_->set_ifmap_xmpp_channel(NULL, idx); agent_->controller_ifmap_xmpp_init(idx)->Reset(); @@ -331,7 +351,13 @@ bool VNController::AgentXmppServerExists(const std::string &server_ip, } void VNController::ApplyDiscoveryXmppServices(std::vector resp) { + ControllerDiscoveryDataType data(new ControllerDiscoveryData(resp)); + ControllerWorkQueueDataType base_data = + boost::static_pointer_cast(data); + work_queue_.Enqueue(base_data); +} +bool VNController::ApplyDiscoveryXmppServicesInternal(std::vector resp) { std::vector::iterator iter; int8_t count = -1; for (iter = resp.begin(); iter != resp.end(); iter++) { @@ -415,6 +441,7 @@ void VNController::ApplyDiscoveryXmppServices(std::vector resp) { } XmppServerConnect(); + return true; } AgentDnsXmppChannel *VNController::FindAgentDnsXmppChannel( @@ -697,6 +724,13 @@ bool VNController::ControllerWorkQueueProcess(ControllerWorkQueueDataType data) return ControllerPeerHeadlessAgentDelDone(derived_walk_done_data-> bgp_peer()); } + //Discovery response for servers + ControllerDiscoveryDataType discovery_data = + boost::dynamic_pointer_cast(data); + if (discovery_data) { + return ApplyDiscoveryXmppServicesInternal(discovery_data-> + discovery_response_); + } return true; } diff --git a/src/vnsw/agent/controller/controller_init.h b/src/vnsw/agent/controller/controller_init.h index 906b7b08f4b..e102f9c93c5 100644 --- a/src/vnsw/agent/controller/controller_init.h +++ b/src/vnsw/agent/controller/controller_init.h @@ -66,11 +66,21 @@ class ControllerXmppData : public ControllerWorkQueueData { DISALLOW_COPY_AND_ASSIGN(ControllerXmppData); }; +class ControllerDiscoveryData : public ControllerWorkQueueData { +public: + ControllerDiscoveryData(std::vector resp); + virtual ~ControllerDiscoveryData() {} + + std::vector discovery_response_; + DISALLOW_COPY_AND_ASSIGN(ControllerDiscoveryData); +}; + class VNController { public: typedef boost::shared_ptr ControllerXmppDataType; typedef boost::shared_ptr ControllerDeletePeerDataType; typedef boost::shared_ptr ControllerWorkQueueDataType; + typedef boost::shared_ptr ControllerDiscoveryDataType; typedef boost::shared_ptr BgpPeerPtr; typedef std::list >::iterator BgpPeerIterator; VNController(Agent *agent); @@ -86,8 +96,8 @@ class VNController { void XmppServerDisConnect(); void DnsXmppServerDisConnect(); - void ApplyDiscoveryXmppServices(std::vector resp); - void ApplyDiscoveryDnsXmppServices(std::vector resp); + void ApplyDiscoveryXmppServices(std::vector resp); + void ApplyDiscoveryDnsXmppServices(std::vector resp); void DisConnectControllerIfmapServer(uint8_t idx); void DisConnectDnsServer(uint8_t idx); @@ -129,6 +139,7 @@ class VNController { bool XmppMessageProcess(ControllerXmppDataType data); Agent 
*agent() {return agent_;} void Enqueue(ControllerWorkQueueDataType data); + void DeleteAgentXmppChannel(AgentXmppChannel *ch); private: AgentXmppChannel *FindAgentXmppChannel(const std::string &server_ip); @@ -137,6 +148,7 @@ const std::string MakeConnectionPrefix(bool is_dns) const; bool AgentXmppServerExists(const std::string &server_ip, std::vector resp); + bool ApplyDiscoveryXmppServicesInternal(std::vector resp); Agent *agent_; uint64_t multicast_sequence_number_; diff --git a/src/vnsw/agent/controller/controller_peer.cc b/src/vnsw/agent/controller/controller_peer.cc index 859aa4f00a2..2b26097206d 100644 --- a/src/vnsw/agent/controller/controller_peer.cc +++ b/src/vnsw/agent/controller/controller_peer.cc @@ -81,6 +81,8 @@ AgentXmppChannel::AgentXmppChannel(Agent *agent, } AgentXmppChannel::~AgentXmppChannel() { + BgpPeer *bgp_peer = bgp_peer_id_.get(); + assert(bgp_peer == NULL); channel_->UnRegisterReceive(xmps::BGP); } @@ -105,7 +107,7 @@ void AgentXmppChannel::CreateBgpPeer() { assert(bgp_peer_id_.get() == NULL); DBTableBase::ListenerId id = agent_->vrf_table()->Register(boost::bind(&VrfExport::Notify, - this, _1, _2)); + agent_, this, _1, _2)); boost::system::error_code ec; const string &addr = agent_->controller_ifmap_xmpp_server(xs_idx_); Ip4Address ip = Ip4Address::from_string(addr.c_str(), ec); @@ -1238,7 +1240,24 @@ void AgentXmppChannel::MulticastPeerDown(AgentXmppChannel *old_mcast_builder, * 2) xmpp channel is in READY state * 3) Valid XMPP channel */ -bool AgentXmppChannel::IsBgpPeerActive(AgentXmppChannel *peer) { +bool AgentXmppChannel::IsBgpPeerActive(const Agent *agent, + AgentXmppChannel *peer) { + bool xmpp_channel_not_found = true; + //Verify if the registered channel is still active or has been deleted + //after the bgp peer went down. This is checked against the agent's + //existing xmpp channels. + for (uint8_t idx = 0; idx < MAX_XMPP_SERVERS; idx++) { + if (agent->controller_xmpp_channel(idx) == peer) { + xmpp_channel_not_found = false; + break; + } + } + if (xmpp_channel_not_found) + return false; + + //Reach here if channel is present. Now check for BGP peer + //as channel may have come up and created another BGP peer. + //Also check for the state of channel.
if (peer && peer->GetXmppChannel() && peer->bgp_peer_id() && (peer->GetXmppChannel()->GetPeerState() == xmps::READY)) { return true; @@ -1456,7 +1475,7 @@ void AgentXmppChannel::HandleAgentXmppClientChannelEvent(AgentXmppChannel *peer, agent->reset_ifmap_active_xmpp_server(); AgentXmppChannel *new_cfg_peer = agent->controller_xmpp_channel(idx); - if (IsBgpPeerActive(new_cfg_peer) && + if (IsBgpPeerActive(agent, new_cfg_peer) && AgentXmppChannel::SetConfigPeer(new_cfg_peer)) { AgentXmppChannel::CleanConfigStale(new_cfg_peer); CONTROLLER_TRACE(Session, new_cfg_peer->GetXmppServer(), @@ -1493,7 +1512,7 @@ void AgentXmppChannel::HandleAgentXmppClientChannelEvent(AgentXmppChannel *peer, // 2) Channel is in READY state // 3) BGP peer is commissioned for channel bool evaluate_new_mcast_builder = - IsBgpPeerActive(new_mcast_builder); + IsBgpPeerActive(agent, new_mcast_builder); if (!evaluate_new_mcast_builder) { new_mcast_builder = NULL; @@ -1567,7 +1586,10 @@ bool AgentXmppChannel::ControllerSendVmCfgSubscribe(AgentXmppChannel *peer, CONTROLLER_TRACE(Trace, peer->GetBgpPeerName(), "", std::string(reinterpret_cast(data_), datalen_)); // send data - return (peer->SendUpdate(data_,datalen_)); + if (peer->SendUpdate(data_,datalen_) == false) { + CONTROLLER_TRACE(Session, peer->GetXmppServer(), + "VM subscribe Send Update deferred", vm, ""); + } return true; } @@ -1604,7 +1626,11 @@ bool AgentXmppChannel::ControllerSendCfgSubscribe(AgentXmppChannel *peer) { CONTROLLER_TRACE(Trace, peer->GetBgpPeerName(), "", std::string(reinterpret_cast(data_), datalen_)); // send data - return (peer->SendUpdate(data_,datalen_)); + if (peer->SendUpdate(data_,datalen_) == false) { + CONTROLLER_TRACE(Session, peer->GetXmppServer(), + "Config subscribe Send Update deferred", node, ""); + } + return true; } bool AgentXmppChannel::ControllerSendSubscribe(AgentXmppChannel *peer, @@ -1650,7 +1676,11 @@ bool AgentXmppChannel::ControllerSendSubscribe(AgentXmppChannel *peer, datalen_ = XmppProto::EncodeMessage(impl.get(), data_, sizeof(data_)); // send data - return (peer->SendUpdate(data_,datalen_)); + if (peer->SendUpdate(data_,datalen_) == false) { + CONTROLLER_TRACE(Session, peer->GetXmppServer(), + "Vrf subscribe Send Update deferred", vrf_id.str(), ""); + } + return true; } bool AgentXmppChannel::ControllerSendV4V6UnicastRouteCommon(AgentRoute *route, @@ -1762,7 +1792,8 @@ bool AgentXmppChannel::ControllerSendV4V6UnicastRouteCommon(AgentRoute *route, datalen_ = XmppProto::EncodeMessage(impl.get(), data_, sizeof(data_)); // send data - return (SendUpdate(data_,datalen_)); + SendUpdate(data_,datalen_); + return true; } bool AgentXmppChannel::BuildTorMulticastMessage(EnetItemType &item, @@ -2070,7 +2101,8 @@ bool AgentXmppChannel::BuildAndSendEvpnDom(EnetItemType &item, datalen_ = XmppProto::EncodeMessage(impl.get(), data_, sizeof(data_)); // send data - return (SendUpdate(data_,datalen_)); + SendUpdate(data_,datalen_); + return true; } bool AgentXmppChannel::ControllerSendEvpnRouteCommon(AgentRoute *route, @@ -2218,7 +2250,8 @@ bool AgentXmppChannel::ControllerSendMcastRouteCommon(AgentRoute *route, datalen_ = XmppProto::EncodeMessage(impl.get(), data_, sizeof(data_)); // send data - return (SendUpdate(data_,datalen_)); + SendUpdate(data_,datalen_); + return true; } bool AgentXmppChannel::ControllerSendEvpnRouteAdd(AgentXmppChannel *peer, diff --git a/src/vnsw/agent/controller/controller_peer.h b/src/vnsw/agent/controller/controller_peer.h index acf8629f9d7..1aec43aced4 100644 --- 
a/src/vnsw/agent/controller/controller_peer.h +++ b/src/vnsw/agent/controller/controller_peer.h @@ -42,7 +42,7 @@ class AgentXmppChannel { void ReceiveBgpMessage(std::auto_ptr impl); //Helper to identify if specified peer has active BGP peer attached - static bool IsBgpPeerActive(AgentXmppChannel *peer); + static bool IsBgpPeerActive(const Agent *agent, AgentXmppChannel *peer); static bool SetConfigPeer(AgentXmppChannel *peer); static void SetMulticastPeer(AgentXmppChannel *old_peer, AgentXmppChannel *new_peer); diff --git a/src/vnsw/agent/controller/controller_route_path.cc b/src/vnsw/agent/controller/controller_route_path.cc index 779ff69b715..d34a46eee63 100644 --- a/src/vnsw/agent/controller/controller_route_path.cc +++ b/src/vnsw/agent/controller/controller_route_path.cc @@ -169,17 +169,31 @@ bool ControllerVmRoute::AddChangePath(Agent *agent, AgentPath *path, ret = true; } - if (new_tunnel_type == TunnelType::VXLAN) { - if (path->vxlan_id() != label_) { - path->set_vxlan_id(label_); - path->set_label(MplsTable::kInvalidLabel); - ret = true; - } + //Interpret label sent by control node + if (tunnel_bmap_ == TunnelType::VxlanType()) { + //Only VXLAN encap is sent, so label is VXLAN + path->set_vxlan_id(label_); + path->set_label(MplsTable::kInvalidLabel); + } else if (tunnel_bmap_ == TunnelType::MplsType()) { + //MPLS (GRE/UDP) is the only encap sent, + //so label is MPLS. + path->set_label(label_); + path->set_vxlan_id(VxLanTable::kInvalidvxlan_id); } else { - if (path->label() != label_) { - path->set_label(label_); - path->set_vxlan_id(VxLanTable::kInvalidvxlan_id); - ret = true; + //Got a mix of Vxlan and Mpls, so interpret label + //as per the computed tunnel type. + if (new_tunnel_type == TunnelType::VXLAN) { + if (path->vxlan_id() != label_) { + path->set_vxlan_id(label_); + path->set_label(MplsTable::kInvalidLabel); + ret = true; + } + } else { + if (path->label() != label_) { + path->set_label(label_); + path->set_vxlan_id(VxLanTable::kInvalidvxlan_id); + ret = true; + } } } diff --git a/src/vnsw/agent/controller/controller_route_walker.cc b/src/vnsw/agent/controller/controller_route_walker.cc index 23a017ef758..a13923501ab 100644 --- a/src/vnsw/agent/controller/controller_route_walker.cc +++ b/src/vnsw/agent/controller/controller_route_walker.cc @@ -76,7 +76,8 @@ bool ControllerRouteWalker::VrfNotifyAll(DBTablePartBase *partition, //Pass this object pointer so that VrfExport::Notify can start the route //walk if required on this VRF. Also it adds state if none is found. 
- VrfExport::Notify(bgp_peer->GetBgpXmppPeer(), partition, entry); + VrfExport::Notify(agent(), bgp_peer->GetBgpXmppPeer(), + partition, entry); CONTROLLER_TRACE(Walker, "Vrf Notify all", vrf->GetName(), bgp_peer->GetName()); return true; @@ -183,8 +184,8 @@ bool ControllerRouteWalker::RouteNotifyInternal(DBTablePartBase *partition, (bgp_peer->GetVrfExportState(vrf_partition, vrf)); vs->rt_export_[table_type]-> - Notify(bgp_peer->GetBgpXmppPeer(), associate_, table_type, partition, - entry); + Notify(agent(), bgp_peer->GetBgpXmppPeer(), associate_, table_type, + partition, entry); return true; } diff --git a/src/vnsw/agent/controller/controller_vrf_export.cc b/src/vnsw/agent/controller/controller_vrf_export.cc index 00969652fb7..0c247d413d1 100644 --- a/src/vnsw/agent/controller/controller_vrf_export.cc +++ b/src/vnsw/agent/controller/controller_vrf_export.cc @@ -24,7 +24,7 @@ VrfExport::State::State() : DBState(), exported_(false), VrfExport::State::~State() { }; -void VrfExport::Notify(AgentXmppChannel *bgp_xmpp_peer, +void VrfExport::Notify(const Agent *agent, AgentXmppChannel *bgp_xmpp_peer, DBTablePartBase *partition, DBEntryBase *e) { BgpPeer *bgp_peer = static_cast(bgp_xmpp_peer->bgp_peer_id()); @@ -33,7 +33,7 @@ void VrfExport::Notify(AgentXmppChannel *bgp_xmpp_peer, //Peer is decommissioned so ignore the notification as there is no active //listener. Deletion of state for decommisioned peer will happen via delpeer //walk. - if (!AgentXmppChannel::IsBgpPeerActive(bgp_xmpp_peer) + if (!AgentXmppChannel::IsBgpPeerActive(agent, bgp_xmpp_peer) && !(vrf->IsDeleted())) { return; } @@ -49,7 +49,7 @@ void VrfExport::Notify(AgentXmppChannel *bgp_xmpp_peer, return; } - if (!AgentXmppChannel::IsBgpPeerActive(bgp_xmpp_peer)) + if (!AgentXmppChannel::IsBgpPeerActive(agent, bgp_xmpp_peer)) return; DBTableBase::ListenerId id = bgp_peer->GetVrfExportListenerId(); diff --git a/src/vnsw/agent/controller/controller_vrf_export.h b/src/vnsw/agent/controller/controller_vrf_export.h index 867a854ebef..a80c7ae34c4 100644 --- a/src/vnsw/agent/controller/controller_vrf_export.h +++ b/src/vnsw/agent/controller/controller_vrf_export.h @@ -22,7 +22,7 @@ class VrfExport { RouteExport *rt_export_[Agent::ROUTE_TABLE_MAX]; }; - static void Notify(AgentXmppChannel *, + static void Notify(const Agent *agent, AgentXmppChannel *, DBTablePartBase *partition, DBEntryBase *e); }; diff --git a/src/vnsw/agent/oper/agent_path.cc b/src/vnsw/agent/oper/agent_path.cc index c1b720034d3..74fe5b0f9c2 100644 --- a/src/vnsw/agent/oper/agent_path.cc +++ b/src/vnsw/agent/oper/agent_path.cc @@ -100,7 +100,8 @@ bool AgentPath::ChangeNH(Agent *agent, NextHop *nh) { MplsLabelKey key(MplsLabel::MCAST_NH, label_); MplsLabel *mpls = static_cast(agent->mpls_table()-> FindActiveEntry(&key)); - ret = agent->mpls_table()->ChangeNH(mpls, nh); + if (agent->mpls_table()->ChangeNH(mpls, nh)) + ret = true; if (mpls) { //Send notify of change mpls->get_table_partition()->Notify(mpls); @@ -360,6 +361,7 @@ bool EvpnDerivedPathData::AddChangePath(Agent *agent, AgentPath *path, EvpnDerivedPath *evpn_path = dynamic_cast(path); assert(evpn_path != NULL); + evpn_path->set_server_ip(reference_path_->server_ip()); uint32_t label = reference_path_->label(); if (evpn_path->label() != label) { evpn_path->set_label(label); diff --git a/src/vnsw/agent/oper/agent_route_walker.cc b/src/vnsw/agent/oper/agent_route_walker.cc index aab3996b176..4dd4b6ab313 100644 --- a/src/vnsw/agent/oper/agent_route_walker.cc +++ b/src/vnsw/agent/oper/agent_route_walker.cc @@ -24,6 
+24,8 @@ AgentRouteWalker::AgentRouteWalker(Agent *agent, WalkType type) : GetTaskId("Agent::RouteWalker"), 0, boost::bind(&AgentRouteWalker::RouteWalker, this, _1)) { walk_count_ = AgentRouteWalker::kInvalidWalkCount; + queued_walk_count_ = AgentRouteWalker::kInvalidWalkCount; + queued_walk_done_count_ = AgentRouteWalker::kInvalidWalkCount; for (uint8_t table_type = (Agent::INVALID + 1); table_type < Agent::ROUTE_TABLE_MAX; table_type++) { @@ -39,18 +41,21 @@ bool AgentRouteWalker::RouteWalker(boost::shared_ptr VrfEntry *vrf = data->vrf_ref_.get(); switch (data->type_) { case AgentRouteWalkerQueueEntry::START_VRF_WALK: + DecrementQueuedWalkCount(); StartVrfWalkInternal(); break; case AgentRouteWalkerQueueEntry::CANCEL_VRF_WALK: CancelVrfWalkInternal(); break; case AgentRouteWalkerQueueEntry::START_ROUTE_WALK: + DecrementQueuedWalkCount(); StartRouteWalkInternal(vrf); break; case AgentRouteWalkerQueueEntry::CANCEL_ROUTE_WALK: CancelRouteWalkInternal(vrf); break; case AgentRouteWalkerQueueEntry::DONE_WALK: + DecrementQueuedWalkDoneCount(); CallbackInternal(vrf, data->all_walks_done_); break; default: @@ -122,7 +127,7 @@ void AgentRouteWalker::StartVrfWalk() { boost::shared_ptr data(new AgentRouteWalkerQueueEntry(NULL, AgentRouteWalkerQueueEntry::START_VRF_WALK, false)); - IncrementWalkCount(); + IncrementQueuedWalkCount(); work_queue_.Enqueue(data); } @@ -140,14 +145,11 @@ void AgentRouteWalker::StartVrfWalkInternal() boost::bind(&AgentRouteWalker::VrfWalkDone, this, _1)); if (vrf_walkid_ != DBTableWalker::kInvalidWalkerId) { + IncrementWalkCount(); AGENT_DBWALK_TRACE(AgentRouteWalkerTrace, "VRF table walk started", walk_type_, "", vrf_walkid_, 0, "", DBTableWalker::kInvalidWalkerId); - } else { - //As walk didnt start because of some failure decrement the - //walk count, as there will be no walk done. - DecrementWalkCount(); } } @@ -159,7 +161,7 @@ void AgentRouteWalker::StartRouteWalk(VrfEntry *vrf) { boost::shared_ptr data(new AgentRouteWalkerQueueEntry(vrf, AgentRouteWalkerQueueEntry::START_ROUTE_WALK, false)); - IncrementWalkCount(); + IncrementQueuedWalkCount(); work_queue_.Enqueue(data); } @@ -185,14 +187,11 @@ void AgentRouteWalker::StartRouteWalkInternal(const VrfEntry *vrf) { this, _1)); if (walkid != DBTableWalker::kInvalidWalkerId) { route_walkid_[table_type][vrf_id] = walkid; + IncrementWalkCount(); AGENT_DBWALK_TRACE(AgentRouteWalkerTrace, "Route table walk started for vrf", walk_type_, (vrf != NULL) ? vrf->GetName() : "Unknown", vrf_walkid_, table_type, "", walkid); - } else { - //As walk didnt start because of some failure decrement the - //walk count, as there will be no walk done. 
- DecrementWalkCount(); } } } @@ -284,7 +283,19 @@ void AgentRouteWalker::RouteWalkDone(DBTableBase *part) { void AgentRouteWalker::DecrementWalkCount() { if (walk_count_ != AgentRouteWalker::kInvalidWalkCount) { - walk_count_--; + walk_count_.fetch_and_decrement(); + } +} + +void AgentRouteWalker::DecrementQueuedWalkCount() { + if (queued_walk_count_ != AgentRouteWalker::kInvalidWalkCount) { + queued_walk_count_.fetch_and_decrement(); + } +} + +void AgentRouteWalker::DecrementQueuedWalkDoneCount() { + if (queued_walk_done_count_ != AgentRouteWalker::kInvalidWalkCount) { + queued_walk_done_count_.fetch_and_decrement(); } } @@ -293,6 +304,7 @@ void AgentRouteWalker::Callback(VrfEntry *vrf) { (new AgentRouteWalkerQueueEntry(vrf, AgentRouteWalkerQueueEntry::DONE_WALK, AreAllWalksDone())); + IncrementQueuedWalkDoneCount(); work_queue_.Enqueue(data); } @@ -341,7 +353,8 @@ bool AgentRouteWalker::AreAllWalksDone() const { } } } - if (walk_done && walk_count_) { + if (walk_done && (walk_count_ != AgentRouteWalker::kInvalidWalkCount) + && (queued_walk_count_ != AgentRouteWalker::kInvalidWalkCount)) { walk_done = false; } return walk_done; @@ -351,7 +364,10 @@ bool AgentRouteWalker::AreAllWalksDone() const { * Check if all walks are over. */ void AgentRouteWalker::OnWalkComplete() { - if (!walk_count_ && walk_done_cb_) { + if ((walk_count_ == AgentRouteWalker::kInvalidWalkCount) && + (queued_walk_count_ == AgentRouteWalker::kInvalidWalkCount) && + (queued_walk_done_count_ == AgentRouteWalker::kInvalidWalkCount) && + walk_done_cb_) { walk_done_cb_(); } } diff --git a/src/vnsw/agent/oper/agent_route_walker.h b/src/vnsw/agent/oper/agent_route_walker.h index 45a378d4c7b..189f2b5c1f9 100644 --- a/src/vnsw/agent/oper/agent_route_walker.h +++ b/src/vnsw/agent/oper/agent_route_walker.h @@ -88,8 +88,14 @@ class AgentRouteWalker { void WalkDoneCallback(WalkDone cb); void RouteWalkDoneForVrfCallback(RouteWalkDoneCb cb); + int queued_walk_done_count() const {return queued_walk_done_count_;} + bool AllWalkDoneDequeued() const {return (queued_walk_done_count_ == + kInvalidWalkCount);} + int queued_walk_count() const {return queued_walk_count_;} + bool AllWalksDequeued() const {return (queued_walk_count_ == + kInvalidWalkCount);} int walk_count() const {return walk_count_;} - bool IsWalkCompleted() const {return (walk_count_ == 0);} + bool IsWalkCompleted() const {return (walk_count_ == kInvalidWalkCount);} //Callback for start of a walk issued from Agent::RouteWalker //task context. 
virtual bool RouteWalker(boost::shared_ptr data); @@ -107,10 +113,16 @@ class AgentRouteWalker { void OnWalkComplete(); void OnRouteTableWalkCompleteForVrf(VrfEntry *vrf); void DecrementWalkCount(); - void IncrementWalkCount() {walk_count_++;} + void IncrementWalkCount() {walk_count_.fetch_and_increment();} + void IncrementQueuedWalkCount() {queued_walk_count_.fetch_and_increment();} + void DecrementQueuedWalkCount(); + void IncrementQueuedWalkDoneCount() {queued_walk_done_count_.fetch_and_increment();} + void DecrementQueuedWalkDoneCount(); Agent *agent_; AgentRouteWalker::WalkType walk_type_; + tbb::atomic queued_walk_count_; + tbb::atomic queued_walk_done_count_; tbb::atomic walk_count_; DBTableWalker::WalkId vrf_walkid_; VrfRouteWalkerIdMap route_walkid_[Agent::ROUTE_TABLE_MAX]; diff --git a/src/vnsw/agent/oper/bridge_route.cc b/src/vnsw/agent/oper/bridge_route.cc index 710699fc16e..83d25bb64fa 100644 --- a/src/vnsw/agent/oper/bridge_route.cc +++ b/src/vnsw/agent/oper/bridge_route.cc @@ -510,7 +510,7 @@ bool BridgeRouteEntry::ReComputeMulticastPaths(AgentPath *path, bool del) { //Fabric path - from multicast builder //Multicast peer AgentPath *multicast_peer_path = NULL; - AgentPath *local_peer_path = NULL; + AgentPath *local_vm_peer_path = NULL; AgentPath *evpn_peer_path = NULL; AgentPath *fabric_peer_path = NULL; AgentPath *tor_peer_path = NULL; @@ -545,7 +545,7 @@ bool BridgeRouteEntry::ReComputeMulticastPaths(AgentPath *path, bool del) { //Handle Add/Changes if (it_path->peer() == agent->local_vm_peer()) { - local_peer_path = it_path; + local_vm_peer_path = it_path; } else if (it_path->peer()->GetType() == Peer::BGP_PEER) { const CompositeNH *bgp_comp_nh = static_cast(it_path->nexthop()); @@ -570,7 +570,7 @@ bool BridgeRouteEntry::ReComputeMulticastPaths(AgentPath *path, bool del) { } //all paths are gone so delete multicast_peer path as well - if ((local_peer_path == NULL) && + if ((local_vm_peer_path == NULL) && (tor_peer_path == NULL) && (evpn_peer_path == NULL) && (fabric_peer_path == NULL)) { @@ -616,13 +616,13 @@ bool BridgeRouteEntry::ReComputeMulticastPaths(AgentPath *path, bool del) { component_nh_list.push_back(component_nh_data3); } - if (local_peer_path) { - NextHopKey *local_peer_key = - static_cast((local_peer_path-> - ComputeNextHop(agent)->GetDBRequestKey()).release()); - std::auto_ptr key1(local_peer_key); - ComponentNHKeyPtr component_nh_data1(new ComponentNHKey(0, key1)); - component_nh_list.push_back(component_nh_data1); + if (local_vm_peer_path) { + NextHopKey *local_vm_peer_key = + static_cast((local_vm_peer_path-> + ComputeNextHop(agent)->GetDBRequestKey()).release()); + std::auto_ptr key4(local_vm_peer_key); + ComponentNHKeyPtr component_nh_data4(new ComponentNHKey(0, key4)); + component_nh_list.push_back(component_nh_data4); } DBRequest nh_req(DBRequest::DB_ENTRY_ADD_CHANGE); @@ -651,12 +651,12 @@ bool BridgeRouteEntry::ReComputeMulticastPaths(AgentPath *path, bool del) { uint32_t tunnel_bmap = TunnelType::AllType(); //Select based on priority of path peer. 
- if (local_peer_path) { - dest_vn_name = local_peer_path->dest_vn_name(); - unresolved = local_peer_path->unresolved(); - vxlan_id = local_peer_path->vxlan_id(); + if (local_vm_peer_path) { + dest_vn_name = local_vm_peer_path->dest_vn_name(); + unresolved = local_vm_peer_path->unresolved(); + vxlan_id = local_vm_peer_path->vxlan_id(); tunnel_bmap = TunnelType::AllType(); - label = local_peer_path->label(); + label = local_vm_peer_path->label(); evpn_label = label; } else if (tor_peer_path) { dest_vn_name = tor_peer_path->dest_vn_name(); diff --git a/src/vnsw/agent/oper/multicast.cc b/src/vnsw/agent/oper/multicast.cc index 82e0e0cb5a0..8ea1d71f407 100644 --- a/src/vnsw/agent/oper/multicast.cc +++ b/src/vnsw/agent/oper/multicast.cc @@ -126,13 +126,11 @@ void MulticastHandler::HandleTsnSubscription(DBTablePartBase *partition, const VrfEntry *vrf = vn->GetVrf(); uint32_t vn_vxlan_id = vn->GetVxLanId(); bool deleted = false; - bool withdraw = false; MulticastDBState *state = static_cast (vn->GetState(partition->parent(), vn_listener_id_)); deleted = vn->IsDeleted() || !(vrf); - withdraw = (state && (state->vxlan_id_ != vn_vxlan_id)); uint32_t old_vxlan_id = state ? state->vxlan_id_ : 0; boost::system::error_code ec; Ip4Address broadcast = IpAddress::from_string("255.255.255.255", @@ -172,7 +170,7 @@ void MulticastHandler::HandleTsnSubscription(DBTablePartBase *partition, } //Delete or withdraw old vxlan id - if (deleted || withdraw) { + if (deleted) { if (!state) return; @@ -223,8 +221,10 @@ void MulticastHandler::ModifyVN(DBTablePartBase *partition, DBEntryBase *e) HandleIpam(vn); HandleFamilyConfig(vn); - HandleVxLanChange(vn); - HandleTsnSubscription(partition, e); + if (agent_->tsn_enabled() == false) + HandleVxLanChange(vn); + if (agent_->tsn_enabled() == true) + HandleTsnSubscription(partition, e); } bool MulticastGroupObject::CanBeDeleted() const { diff --git a/src/vnsw/agent/oper/peer.cc b/src/vnsw/agent/oper/peer.cc index 57824ca19ff..813ea8735dc 100644 --- a/src/vnsw/agent/oper/peer.cc +++ b/src/vnsw/agent/oper/peer.cc @@ -93,7 +93,7 @@ void BgpPeer::DeleteVrfState(DBTablePartBase *partition, // other new peer. if (bgp_xmpp_peer_ && (bgp_xmpp_peer_->bgp_peer_id() == this) && - AgentXmppChannel::IsBgpPeerActive(bgp_xmpp_peer_)) { + AgentXmppChannel::IsBgpPeerActive(agent(), bgp_xmpp_peer_)) { AgentXmppChannel::ControllerSendSubscribe(bgp_xmpp_peer_, vrf, false); } diff --git a/src/vnsw/agent/ovs_tor_agent/ovsdb_client/multicast_mac_local_ovsdb.cc b/src/vnsw/agent/ovs_tor_agent/ovsdb_client/multicast_mac_local_ovsdb.cc index beffa24889d..9e7d2b22378 100644 --- a/src/vnsw/agent/ovs_tor_agent/ovsdb_client/multicast_mac_local_ovsdb.cc +++ b/src/vnsw/agent/ovs_tor_agent/ovsdb_client/multicast_mac_local_ovsdb.cc @@ -49,6 +49,30 @@ OVSDB::VnOvsdbEntry *MulticastMacLocalEntry::GetVnEntry() const { return vn_entry; } +void MulticastMacLocalEntry::EvaluateVrfDependency(VrfEntry *vrf) { + OVSDB::VnOvsdbEntry *vn_entry = GetVnEntry(); + if ((vn_entry == NULL) || (vn_entry->vrf() == NULL)) { + OnVrfDelete(); + //Issue an ADD_CHANGE_REQ to push entry into defer state. + //Whenever VN entry is back this will be updated.
+ table_->NotifyEvent(this, KSyncEntry::ADD_CHANGE_REQ); + } +} + +void MulticastMacLocalEntry::OnVrfDelete() { + if (vrf_ == NULL) + return; + + MulticastMacLocalOvsdb *table = static_cast(table_); + OVSDB_TRACE(Trace, "Deleting Multicast Route VN uuid " + logical_switch_name_); + table->peer()->DeleteOvsPeerMulticastRoute(vrf_.get(), vxlan_id_); + table->vrf_dep_list_.erase(MulticastMacLocalOvsdb::VrfDepEntry(vrf_.get(), + this)); + // remove vrf reference after deleting route + vrf_ = NULL; + return; +} + bool MulticastMacLocalEntry::Add() { MulticastMacLocalOvsdb *table = static_cast(table_); OVSDB::VnOvsdbEntry *vn_entry = GetVnEntry(); @@ -56,6 +80,8 @@ bool MulticastMacLocalEntry::Add() { vrf_ = vn_entry->vrf(); OVSDB_TRACE(Trace, "Adding multicast Route VN uuid " + logical_switch_name_); vxlan_id_ = logical_switch_->vxlan_id(); + table->vrf_dep_list_.insert(MulticastMacLocalOvsdb::VrfDepEntry(vrf_.get(), + this)); table->peer()->AddOvsPeerMulticastRoute(vrf_.get(), vxlan_id_, vn_entry->name(), table_->client_idl()->tsn_ip(), @@ -68,11 +94,7 @@ bool MulticastMacLocalEntry::Change() { } bool MulticastMacLocalEntry::Delete() { - MulticastMacLocalOvsdb *table = static_cast(table_); - OVSDB_TRACE(Trace, "Deleting Multicast Route VN uuid " + logical_switch_name_); - table->peer()->DeleteOvsPeerMulticastRoute(vrf_.get(), vxlan_id_); - // remove vrf reference after deleting route - vrf_ = NULL; + OnVrfDelete(); logical_switch_ = NULL; return true; } @@ -100,15 +122,40 @@ const std::string &MulticastMacLocalEntry::logical_switch_name() const { MulticastMacLocalOvsdb::MulticastMacLocalOvsdb(OvsdbClientIdl *idl, OvsPeer *peer) : OvsdbObject(idl), peer_(peer) { -} + vrf_reeval_queue_ = new WorkQueue( + TaskScheduler::GetInstance()->GetTaskId("Agent::KSync"), 0, + boost::bind(&MulticastMacLocalOvsdb::VrfReEval, this, _1)); + } MulticastMacLocalOvsdb::~MulticastMacLocalOvsdb() { + vrf_reeval_queue_->Shutdown(); + delete vrf_reeval_queue_; } OvsPeer *MulticastMacLocalOvsdb::peer() { return peer_; } +void MulticastMacLocalOvsdb::VrfReEvalEnqueue(VrfEntry *vrf) { + vrf_reeval_queue_->Enqueue(vrf); +} + +//Only executed for VRF delete +bool MulticastMacLocalOvsdb::VrfReEval(VrfEntryRef vrf_ref) { + // iterate through dependency list and trigger del and add + VrfDepList::iterator it = + vrf_dep_list_.upper_bound(VrfDepEntry(vrf_ref.get(), NULL)); + while (it != vrf_dep_list_.end()) { + if (it->first != vrf_ref.get()) { + break; + } + MulticastMacLocalEntry *m_entry = it->second; + it++; + m_entry->EvaluateVrfDependency(vrf_ref.get()); + } + return true; +} + KSyncEntry *MulticastMacLocalOvsdb::Alloc(const KSyncEntry *key, uint32_t index) { const MulticastMacLocalEntry *k_entry = static_cast(key); diff --git a/src/vnsw/agent/ovs_tor_agent/ovsdb_client/multicast_mac_local_ovsdb.h b/src/vnsw/agent/ovs_tor_agent/ovsdb_client/multicast_mac_local_ovsdb.h index f4a0a2ce4c3..702689d466f 100644 --- a/src/vnsw/agent/ovs_tor_agent/ovsdb_client/multicast_mac_local_ovsdb.h +++ b/src/vnsw/agent/ovs_tor_agent/ovsdb_client/multicast_mac_local_ovsdb.h @@ -12,8 +12,13 @@ class OvsPeer; namespace OVSDB { class VnOvsdbEntry; +class LogicalSwitchEntry; +class MulticastMacLocalEntry; class MulticastMacLocalOvsdb : public OvsdbObject { public: + typedef std::pair VrfDepEntry; + typedef std::set VrfDepList; + MulticastMacLocalOvsdb(OvsdbClientIdl *idl, OvsPeer *peer); ~MulticastMacLocalOvsdb(); @@ -21,8 +26,13 @@ class MulticastMacLocalOvsdb : public OvsdbObject { void Notify(OvsdbClientIdl::Op op, struct ovsdb_idl_row 
*row); KSyncEntry *Alloc(const KSyncEntry *key, uint32_t index); + void VrfReEvalEnqueue(VrfEntry *vrf); + bool VrfReEval(VrfEntryRef vrf); private: + friend class MulticastMacLocalEntry; OvsPeer *peer_; + VrfDepList vrf_dep_list_; + WorkQueue *vrf_reeval_queue_; DISALLOW_COPY_AND_ASSIGN(MulticastMacLocalOvsdb); }; @@ -47,6 +57,9 @@ class MulticastMacLocalEntry : public OvsdbEntry { OVSDB::VnOvsdbEntry *GetVnEntry() const; private: + void OnVrfDelete(); + void EvaluateVrfDependency(VrfEntry *vrf); + friend class MulticastMacLocalOvsdb; std::string logical_switch_name_; // take reference to the vrf while exporting route, to assure sanity diff --git a/src/vnsw/agent/ovs_tor_agent/ovsdb_client/test/test_ovs_multicast_local.cc b/src/vnsw/agent/ovs_tor_agent/ovsdb_client/test/test_ovs_multicast_local.cc index 6d021750d5f..ad3708c2c92 100644 --- a/src/vnsw/agent/ovs_tor_agent/ovsdb_client/test/test_ovs_multicast_local.cc +++ b/src/vnsw/agent/ovs_tor_agent/ovsdb_client/test/test_ovs_multicast_local.cc @@ -157,6 +157,94 @@ TEST_F(OvsBaseTest, MulticastLocal_add_mcroute_without_vrf_vn_link_present) { WAIT_FOR(1000, 10000, (VnGet(1) == NULL)); } +TEST_F(OvsBaseTest, MulticastLocal_on_del_vrf_del_mcast) { + //Add vrf + VrfAddReq("vrf1"); + WAIT_FOR(100, 10000, (VrfGet("vrf1", false) != NULL)); + //Add VN + VnAddReq(1, "vn1", "vrf1"); + WAIT_FOR(100, 10000, (VnGet(1) != NULL)); + //Add device + DBRequest device_req(DBRequest::DB_ENTRY_ADD_CHANGE); + device_req.key.reset(new PhysicalDeviceKey(MakeUuid(1))); + device_req.data.reset(new PhysicalDeviceData(agent_, "test-router", + "test-router", "", + Ip4Address::from_string("1.1.1.1"), + Ip4Address::from_string("2.2.2.2"), + "OVS", NULL)); + agent_->physical_device_table()->Enqueue(&device_req); + WAIT_FOR(100, 10000, + (agent_->physical_device_table()->Find(MakeUuid(1)) != NULL)); + //Add device_vn + AddPhysicalDeviceVn(agent_, 1, 1, true); + + //Delete + VrfDelReq("vrf1"); + //To remove vrf from VN, add another non-existent vrf. + VnAddReq(1, "vn1", "vrf2"); + //Since VRF is gone from VN even though VN is not gone, mcast entry should be + //gone. + WAIT_FOR(1000, 10000, (L2RouteGet("vrf1", MacAddress::BroadcastMac()) == NULL)); + + //Delete + DelPhysicalDeviceVn(agent_, 1, 1, true); + DBRequest del_dev_req(DBRequest::DB_ENTRY_DELETE); + del_dev_req.key.reset(new PhysicalDeviceVnKey(MakeUuid(1), + MakeUuid(1))); + agent_->physical_device_table()->Enqueue(&del_dev_req); + WAIT_FOR(1000, 10000, + (agent_->physical_device_table()-> + Find(MakeUuid(1)) == NULL)); + VnDelReq(1); + WAIT_FOR(1000, 10000, (VrfGet("vrf1", true) == NULL)); + WAIT_FOR(1000, 10000, (VnGet(1) == NULL)); +} + +TEST_F(OvsBaseTest, MulticastLocal_on_del_vrf_vn_link) { + //Add vrf + VrfAddReq("vrf1"); + WAIT_FOR(100, 10000, (VrfGet("vrf1", false) != NULL)); + //Add VN + VnAddReq(1, "vn1", "vrf1"); + WAIT_FOR(100, 10000, (VnGet(1) != NULL)); + //Add device + DBRequest device_req(DBRequest::DB_ENTRY_ADD_CHANGE); + device_req.key.reset(new PhysicalDeviceKey(MakeUuid(1))); + device_req.data.reset(new PhysicalDeviceData(agent_, "test-router", + "test-router", "", + Ip4Address::from_string("1.1.1.1"), + Ip4Address::from_string("2.2.2.2"), + "OVS", NULL)); + agent_->physical_device_table()->Enqueue(&device_req); + WAIT_FOR(100, 10000, + (agent_->physical_device_table()->Find(MakeUuid(1)) != NULL)); + //Add device_vn + AddPhysicalDeviceVn(agent_, 1, 1, true); + + //To remove vrf from VN, add another non-existent vrf.
+ VnAddReq(1, "vn1", "vrf2"); + //Since VRF is gone from VN even though VN is not gone, mcast entry shud be + //gone. + WAIT_FOR(1000, 10000, (L2RouteGet("vrf1", MacAddress::BroadcastMac()) == NULL)); + //Add back the VRF + VnAddReq(1, "vn1", "vrf1"); + WAIT_FOR(1000, 10000, (L2RouteGet("vrf1", MacAddress::BroadcastMac()) != NULL)); + + //Delete + DelPhysicalDeviceVn(agent_, 1, 1, true); + DBRequest del_dev_req(DBRequest::DB_ENTRY_DELETE); + del_dev_req.key.reset(new PhysicalDeviceVnKey(MakeUuid(1), + MakeUuid(1))); + agent_->physical_device_table()->Enqueue(&del_dev_req); + WAIT_FOR(1000, 10000, + (agent_->physical_device_table()-> + Find(MakeUuid(1)) == NULL)); + VrfDelReq("vrf1"); + VnDelReq(1); + WAIT_FOR(1000, 10000, (VrfGet("vrf1", true) == NULL)); + WAIT_FOR(1000, 10000, (VnGet(1) == NULL)); +} + int main(int argc, char *argv[]) { GETUSERARGS(); // override with true to initialize ovsdb server and client diff --git a/src/vnsw/agent/ovs_tor_agent/ovsdb_client/vn_ovsdb.cc b/src/vnsw/agent/ovs_tor_agent/ovsdb_client/vn_ovsdb.cc index 99025af995f..65580e4f1e5 100644 --- a/src/vnsw/agent/ovs_tor_agent/ovsdb_client/vn_ovsdb.cc +++ b/src/vnsw/agent/ovs_tor_agent/ovsdb_client/vn_ovsdb.cc @@ -12,6 +12,7 @@ extern "C" { #include #include +#include #include using OVSDB::OvsdbDBEntry; @@ -48,6 +49,11 @@ void VnOvsdbEntry::DeleteMsg(struct ovsdb_idl_txn *txn) { // on delete of this entry trigger Vrf re-eval for entries in // Unicast Mac Local Table uc_obj->VrfReEvalEnqueue(vrf_.get()); + //For multicast vrf delete needs to known for deleting + //route. + MulticastMacLocalOvsdb *mc_obj = + table_->client_idl()->multicast_mac_local_ovsdb(); + mc_obj->VrfReEvalEnqueue(vrf_.get()); vrf_ = NULL; } diff --git a/src/vnsw/agent/test/test_agent_route_walker.cc b/src/vnsw/agent/test/test_agent_route_walker.cc index 077c19b7857..f043db493ee 100644 --- a/src/vnsw/agent/test/test_agent_route_walker.cc +++ b/src/vnsw/agent/test/test_agent_route_walker.cc @@ -166,6 +166,7 @@ class AgentRouteWalkerTest : public AgentRouteWalker, public ::testing::Test { //2.2.2.20/32; 255.255.255.255; 0:0:2:2:2:20; ff:ff:ff:ff:ff:ff route_notifications_++; route_table_walk_started_ = true; + assert(AreAllWalksDone() == false); return true; } @@ -178,6 +179,7 @@ class AgentRouteWalkerTest : public AgentRouteWalker, public ::testing::Test { vrf_notifications_++; VrfEntry *vrf = static_cast(e); StartRouteWalk(vrf); + assert(AreAllWalksDone() == false); return true; } diff --git a/src/vnsw/agent/test/test_l2route.cc b/src/vnsw/agent/test/test_l2route.cc index 05e86739d10..27fa0410dbf 100644 --- a/src/vnsw/agent/test/test_l2route.cc +++ b/src/vnsw/agent/test/test_l2route.cc @@ -615,6 +615,41 @@ TEST_F(RouteTest, vxlan_network_id_change_for_non_l2_interface) { client->WaitForIdle(); } +TEST_F(RouteTest, RemoteVxlanEncapRoute_1) { + client->Reset(); + VxLanNetworkIdentifierMode(false); + client->WaitForIdle(); + RouteTest::SetTunnelType(TunnelType::MPLS_GRE); + DelEncapList(); + client->WaitForIdle(); + + TunnelType::TypeBmap bmap = TunnelType::VxlanType(); + AddRemoteVmRoute(remote_vm_mac_, remote_vm_ip4_, server1_ip_, + MplsTable::kStartLabel, bmap); + WAIT_FOR(1000, 100, + (L2RouteFind(vrf_name_, remote_vm_mac_, remote_vm_ip4_) == true)); + + BridgeRouteEntry *rt = L2RouteGet(vrf_name_, remote_vm_mac_, + remote_vm_ip4_); + const AgentPath *path = rt->GetActivePath(); + EXPECT_TRUE(path->tunnel_type() != TunnelType::VXLAN); + + //Update VXLAN in encap prio + AddEncapList("MPLSoGRE", "MPLSoUDP", "VXLAN"); + client->WaitForIdle(); + 
path = rt->GetActivePath(); + EXPECT_TRUE(path->tunnel_type() == TunnelType::VXLAN); + const TunnelNH *nh = + static_cast(path->nexthop()); + EXPECT_TRUE(nh->GetDip()->to_string() == server1_ip_.to_string()); + + DeleteRoute(agent_->local_peer(), vrf_name_, remote_vm_mac_, + remote_vm_ip4_); + client->WaitForIdle(); + DelEncapList(); + client->WaitForIdle(); +} + TEST_F(RouteTest, Enqueue_l2_route_add_on_deleted_vrf) { struct PortInfo input[] = { {"vnet1", 1, "1.1.1.10", "00:00:00:01:01:01", 1, 1}, diff --git a/src/vnsw/agent/test/test_route.cc b/src/vnsw/agent/test/test_route.cc index c32f13326b3..0713b5581a2 100644 --- a/src/vnsw/agent/test/test_route.cc +++ b/src/vnsw/agent/test/test_route.cc @@ -2122,6 +2122,43 @@ TEST_F(RouteTest, ArpRouteDelete) { EXPECT_FALSE(FindNH(&key)); } +TEST_F(RouteTest, verify_channel_delete_results_in_path_delete) { + client->Reset(); + struct PortInfo input[] = { + {"vnet1", 1, "1.1.1.1", "00:00:00:01:01:01", 1, 1}, + }; + + IpamInfo ipam_info[] = { + {"1.1.1.0", 24, "1.1.1.200", true}, + }; + client->Reset(); + CreateVmportEnv(input, 1, 0); + client->WaitForIdle(); + AddIPAM("vn1", ipam_info, 1); + client->WaitForIdle(); + + BgpPeer *peer = CreateBgpPeer("127.0.0.1", "remote"); + FillEvpnNextHop(peer, "vrf1", 1000, TunnelType::MplsType()); + client->WaitForIdle(); + //Get Channel and delete it. + AgentXmppChannel *ch = peer->GetBgpXmppPeer(); + XmppChannelMock *xmpp_channel = static_cast + (ch->GetXmppChannel()); + delete ch; + delete xmpp_channel; + client->WaitForIdle(); + + client->Reset(); + DelIPAM("vn1"); + client->WaitForIdle(); + FlushEvpnNextHop(peer, "vrf1", 0); + DeleteVmportEnv(input, 1, 1, 0); + client->WaitForIdle(); + DeleteBgpPeer(NULL); + client->WaitForIdle(); +} + + int main(int argc, char *argv[]) { ::testing::InitGoogleTest(&argc, argv); GETUSERARGS(); diff --git a/src/vnsw/agent/test/test_util.cc b/src/vnsw/agent/test/test_util.cc index bc347174306..d9e3eef7438 100644 --- a/src/vnsw/agent/test/test_util.cc +++ b/src/vnsw/agent/test/test_util.cc @@ -3165,11 +3165,12 @@ BgpPeer *CreateBgpPeer(const Ip4Address &addr, std::string name) { void DeleteBgpPeer(Peer *peer) { BgpPeer *bgp_peer = static_cast(peer); - if (!bgp_peer) - return; - AgentXmppChannel *channel = bgp_peer->GetBgpXmppPeer(); - AgentXmppChannel::HandleAgentXmppClientChannelEvent(channel, xmps::NOT_READY); + AgentXmppChannel *channel = NULL; + if (bgp_peer) { + channel = bgp_peer->GetBgpXmppPeer(); + AgentXmppChannel::HandleAgentXmppClientChannelEvent(channel, xmps::NOT_READY); + } client->WaitForIdle(); TaskScheduler::GetInstance()->Stop(); Agent::GetInstance()->controller()->unicast_cleanup_timer().cleanup_timer_-> @@ -3182,10 +3183,12 @@ void DeleteBgpPeer(Peer *peer) { client->WaitForIdle(); Agent::GetInstance()->controller()->Cleanup(); client->WaitForIdle(); - XmppChannelMock *xmpp_channel = static_cast - (channel->GetXmppChannel()); - delete channel; - delete xmpp_channel; + if (channel) { + XmppChannelMock *xmpp_channel = static_cast + (channel->GetXmppChannel()); + delete channel; + delete xmpp_channel; + } } void FillEvpnNextHop(BgpPeer *peer, std::string vrf_name,